Windsor Diary 28

Let's try to understand why MLEs are 'good':

If I get more and more data, I can uncover the truth.

Law of Large Numbers:

If the distribution of the i.i.d. sample X_1,...,X_n is such that X_1 has a finite expectation, i.e. \vert EX_1 \vert < ∞, then the sample average

\tilde{X}_n = \frac{X_1 + ... + X_n}{n} \rightarrow EX_1

converges to its expectation in probability, which means that for any arbitrarily small \varepsilon > 0,

P(\vert \tilde{X}_n - EX_1 \vert > \varepsilon) \rightarrow 0 as n \rightarrow ∞.
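
The following minimal Python sketch illustrates the LLN numerically; the Bernoulli(0.3) distribution and the sample sizes are illustrative choices, not part of the statement above.

```python
import numpy as np

# LLN sketch: averages of i.i.d. Bernoulli(0.3) draws approach EX_1 = 0.3.
rng = np.random.default_rng(0)
for n in [10, 100, 10_000, 1_000_000]:
    x = rng.binomial(1, 0.3, size=n)   # n i.i.d. Bernoulli(0.3) draws
    print(n, x.mean())                 # the sample average drifts toward 0.3
```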

Note. Whenever we use the LLN below, we will simply say that the average converges to its expectation, without mentioning in what sense. More mathematically inclined readers are welcome to carry out these steps rigorously, especially when we use the LLN in combination with the Central Limit Theorem.

Central Limit Theorem:

If the distribution of the i.i.d. sample X_1,...,X_n is such that X_1 has finite expectation and variance, i.e. \vert EX_1 \vert < ∞ and \sigma^2 = Var(X_1) < ∞, then

\sqrt{n} (\tilde{X}_n - EX_1) \to_d N(0, \sigma^2)

converges in distribution to a normal distribution with zero mean and variance \sigma^2, which means that for any interval [a,b],

P(\sqrt{n}(\tilde{X}_n - EX_1 ) \in [a,b])\rightarrow \int_{a}^{b} \frac{1}{\sqrt{2\pi } \sigma } e^{-\frac{x^2}{2\sigma ^2} } dx.

In other words, the random variable \sqrt{n}(\tilde{X}_n - EX_1) will behave like a random variable from a normal distribution when n gets large.
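
A minimal Monte Carlo sketch of the CLT, using Uniform(0,1) samples (so EX_1 = 1/2 and \sigma^2 = 1/12); the sample size, number of replications, and interval [a,b] are illustrative choices.

```python
import math
import numpy as np

# CLT sketch: rescaled averages z = sqrt(n) * (X_bar - EX_1) of Uniform(0,1)
# samples should have variance near sigma^2 = 1/12 and near-normal
# interval probabilities.
rng = np.random.default_rng(0)
n, reps = 1_000, 10_000
xbar = rng.random((reps, n)).mean(axis=1)   # reps independent sample averages
z = np.sqrt(n) * (xbar - 0.5)
print("empirical variance:", z.var())       # close to 1/12 ≈ 0.0833

def normal_cdf(x, sigma):
    # CDF of N(0, sigma^2), written via the error function
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

a, b, sigma = -0.2, 0.2, math.sqrt(1 / 12)
print("empirical P(z in [a,b]):", np.mean((z >= a) & (z <= b)))
print("normal    P(z in [a,b]):", normal_cdf(b, sigma) - normal_cdf(a, sigma))
```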

We will show that the MLE usually satisfies the following two properties, called consistency and asymptotic normality.

1. Consistency. We say that an estimate \hat{\theta }  is consistent if \hat{\theta } \rightarrow \theta _0 in probability as n \rightarrow  ∞, where \theta _0 is the 'true' unknown parameter of the distribution of the sample.

2. Asymptotic Normality. We say that \hat{\theta }  is asymptotically normal if 

\sqrt{n} (\hat{\theta} - \theta_0) \to_d N(0, \sigma_{\theta_0}^2)

where \sigma_{\theta_0}^2 is called the asymptotic variance of the estimate \hat{\theta}. Asymptotic normality says that the estimator not only converges to the unknown parameter, but converges to it at the rate 1/\sqrt{n}.
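
A minimal sketch of this 1/\sqrt{n} rate, using the sample average of Bernoulli(0.3) draws as the estimator (an illustrative choice; as the example at the end shows, this is in fact the MLE for the Bernoulli family): the root-mean-square error shrinks like 1/\sqrt{n}, so multiplying by \sqrt{n} stabilizes it.

```python
import numpy as np

# Rate sketch: rmse of the sample average shrinks like 1/sqrt(n), so
# rmse * sqrt(n) should stabilize near sqrt(p0 * (1 - p0)) ≈ 0.458.
rng = np.random.default_rng(0)
p0 = 0.3
for n in [100, 10_000, 1_000_000]:
    # 2,000 replications of the estimator; the sum of n Bernoulli(p0)
    # draws is Binomial(n, p0), so p_hat can be sampled directly.
    p_hat = rng.binomial(n, p0, size=2_000) / n
    rmse = np.sqrt(np.mean((p_hat - p0) ** 2))
    print(n, "rmse =", rmse, " rmse * sqrt(n) =", rmse * np.sqrt(n))
```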

Consistency of MLE:

Suppose that the data X_{1:n} is generated from a distribution with unknown parameter \theta_0. Why does the MLE \hat{\theta} converge to the unknown parameter \theta_0? This is not immediately obvious, and we will give a sketch of why this happens.

First of all, the MLE \hat{\theta} is the maximizer of L_n(\theta) = \frac{1}{n} \sum_{i=1}^n \log f(X_i|\theta), the log-likelihood function normalized by \frac{1}{n}. Notice that the function L_n(\theta) depends on the data. Let us consider the function l(X|\theta) = \log f(X|\theta) and define L(\theta) = E_{\theta_0} l(X|\theta), where E_{\theta_0} denotes the expectation with respect to the true unknown parameter \theta_0 of the sample X_{1:n}.

If we deal with continuous distributions, then L(\theta) = \int (\log f(x|\theta)) f(x|\theta_0) dx.

By the law of large numbers, for any \theta, L_n(\theta) \rightarrow E_{\theta_0} l(X|\theta) = L(\theta). Note that L(\theta) does not depend on the sample; it depends only on \theta. We will need the following.

Lemma. We have that for any \theta, L(\theta) \leq L(\theta_0). Moreover, the inequality is strict, L(\theta) < L(\theta_0), unless P_{\theta_0}(f(X|\theta) = f(X|\theta_0)) = 1, which means that P_\theta = P_{\theta_0}.

Proof. Let us consider the difference

L(\theta) - L(\theta_0) = E_{\theta_0}(\log f(X|\theta) - \log f(X|\theta_0)) = E_{\theta_0} \log \frac{f(X|\theta)}{f(X|\theta_0)}.

Since \log t \leq t - 1 for all t > 0, we can write

E_{\theta_0} \log \frac{f(X|\theta)}{f(X|\theta_0)} \leq E_{\theta_0} \Big( \frac{f(X|\theta)}{f(X|\theta_0)} - 1 \Big) = \int \Big( \frac{f(x|\theta)}{f(x|\theta_0)} - 1 \Big) f(x|\theta_0) dx

=\int f(x|\theta)dx-\int f(x|\theta _0)dx=1-1=0.

Both integrals are equal to 1 because we are integrating probability density functions. This proves that L(\theta) - L(\theta_0) \leq 0. For the second statement, note that \log t = t - 1 only at t = 1, so the inequality above is strict unless \frac{f(X|\theta)}{f(X|\theta_0)} = 1 with probability 1 under P_{\theta_0}. We will use this Lemma to sketch the consistency of the MLE.
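
For the Bernoulli family the function L(\theta) has the closed form \theta_0 \log \theta + (1 - \theta_0) \log(1 - \theta), so the Lemma can be checked numerically. A minimal sketch, with \theta_0 = 0.3 and the grid as illustrative choices:

```python
import numpy as np

# Lemma sketch for Bernoulli: L(theta) = theta0*log(theta) +
# (1 - theta0)*log(1 - theta) is maximized exactly at theta = theta0.
theta0 = 0.3
theta = np.linspace(0.01, 0.99, 99)        # grid of candidate parameters
L = theta0 * np.log(theta) + (1 - theta0) * np.log(1 - theta)
L0 = theta0 * np.log(theta0) + (1 - theta0) * np.log(1 - theta0)
print("argmax of L:", theta[np.argmax(L)])                 # ≈ theta0 = 0.3
print("L(theta) <= L(theta0) on the grid:", bool(np.all(L <= L0 + 1e-12)))
```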

Theorem. Under some regularity conditions on the family of distributions, the MLE \hat{\theta} is consistent, i.e. \hat{\theta} \rightarrow \theta_0 in probability as n \rightarrow ∞.

Proof. We have the following facts:

1) \hat{\theta }  is the maximizer of L_n(\theta ) by definition.

2) \theta _0 is the maximizer of L(\theta ) by Lemma.

3) \forall \theta we have L_n(\theta) \rightarrow L(\theta) by the LLN.

Therefore, for large n the random function L_n(\theta) is close to the deterministic function L(\theta), and it is intuitively clear that the maximizer \hat{\theta} of L_n(\theta) will then be close to the maximizer \theta_0 of L(\theta); the regularity conditions are what make this last step precise. A minimal numerical sketch follows.
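
For the Bernoulli family with \theta_0 = 0.3 (all numerical choices illustrative): as n grows, L_n(\theta) gets uniformly close to L(\theta) on a grid, and its maximizer, the MLE, approaches \theta_0.

```python
import numpy as np

# Consistency sketch: for Bernoulli, L_n(theta) = X_bar*log(theta) +
# (1 - X_bar)*log(1 - theta); as n grows it approaches L(theta) and its
# maximizer approaches theta0.
rng = np.random.default_rng(0)
theta0 = 0.3
theta = np.linspace(0.01, 0.99, 981)
L = theta0 * np.log(theta) + (1 - theta0) * np.log(1 - theta)
for n in [30, 300, 30_000]:
    xbar = rng.binomial(1, theta0, size=n).mean()
    Ln = xbar * np.log(theta) + (1 - xbar) * np.log(1 - theta)
    print(n, "max|L_n - L| =", np.max(np.abs(Ln - L)),
          " argmax L_n =", theta[np.argmax(Ln)])
```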

Asymptotic normality of MLE, Fisher information:

We want to show the asymptotic normality of MLE, i.e. to show that 

\sqrt{n}(\hat{\theta} - \theta_0) \to_d N(0, \sigma^2_{MLE}) for some \sigma^2_{MLE}, and to compute it.

This asymptotic variance in some sense measures the quality of the MLE. First, we need to introduce a notion called Fisher information.

Let us recall that above we defined the function l(X|\theta) = \log f(X|\theta). To simplify notation, we will denote by l'(X|\theta), l''(X|\theta), etc. the derivatives of l(X|\theta) with respect to \theta.

Definition. (Fisher Information) Fisher information of a random variable X with distribution P_{\theta _0} from the family \left\{ P_\theta : \theta \in \Theta  \right\}  is defined by 

I(\theta_0) = E_{\theta_0} \big( l'(X|\theta_0) \big)^2 \equiv E_{\theta_0} \Big( \frac{\partial}{\partial \theta} \log f(X|\theta) \Big|_{\theta = \theta_0} \Big)^2.

Theorem. (Asymptotic normality of MLE) We have \sqrt{n}(\hat{\theta} - \theta_0) \to_d N(0, \frac{1}{I(\theta_0)}), i.e. the asymptotic variance of the MLE is \sigma^2_{MLE} = 1/I(\theta_0).
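
Before computing 1/I(\theta_0) analytically for the Bernoulli family below, here is a Monte Carlo sketch of the theorem for a different family, the exponential distributions f(x|\lambda) = \lambda e^{-\lambda x}, for which \hat{\lambda} = 1/\tilde{X}_n and I(\lambda) = 1/\lambda^2 (standard facts, stated here without derivation); \lambda_0, n, and the number of replications are illustrative choices.

```python
import numpy as np

# Asymptotic normality sketch: for Exp(lambda0), sqrt(n)*(lambda_hat - lambda0)
# should have mean ≈ 0 and variance ≈ 1/I(lambda0) = lambda0**2.
rng = np.random.default_rng(0)
lam0, n, reps = 2.0, 1_000, 5_000
x = rng.exponential(scale=1 / lam0, size=(reps, n))   # i.i.d. Exp(lambda0) samples
lam_hat = 1.0 / x.mean(axis=1)                        # MLE: lambda_hat = 1 / X_bar
z = np.sqrt(n) * (lam_hat - lam0)
print("empirical mean:", z.mean())                    # ≈ 0
print("empirical variance:", z.var())                 # ≈ lam0**2 = 4
```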

Example. The family of Bernoulli distributions B(p) has p.f. f(x|p) = p^x(1-p)^{1-x}, and taking the logarithm, \log f(x|p) = x \log p + (1-x) \log(1-p). The first and second derivatives with respect to the parameter p are

\frac{\partial}{\partial p} \log f(x|p) = \frac{x}{p} - \frac{1-x}{1-p}, \qquad \frac{\partial^2}{\partial p^2} \log f(x|p) = -\frac{x}{p^2} - \frac{1-x}{(1-p)^2}.

Then, using the standard fact that under regularity conditions I(p) = -E l''(X|p), the Fisher information can be computed as

I(p) = -E \frac{\partial^2}{\partial p^2} \log f(X|p) = \frac{EX}{p^2} + \frac{1-EX}{(1-p)^2} = \frac{p}{p^2} + \frac{1-p}{(1-p)^2} = \frac{1}{p(1-p)}.

The MLE of p is \hat{p} = \tilde{X}_n, and the asymptotic normality result states that

\sqrt{n}(\hat{p} - p_0) \to_d N(0, p_0(1-p_0)), which, of course, also follows directly from the CLT.
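
A small numerical check of this example (with p_0 = 0.3, n, and the number of replications as illustrative choices): the two expressions for the Fisher information, E(l'(X|p))^2 and -E l''(X|p), both equal 1/(p(1-p)), and \sqrt{n}(\hat{p} - p_0) has variance close to p_0(1-p_0).

```python
import numpy as np

p0 = 0.3
# Exact expectations over X in {0, 1} with P(X = 1) = p0:
# l'(x|p) = x/p - (1-x)/(1-p),  -l''(x|p) = x/p**2 + (1-x)/(1-p)**2
E_lprime_sq = p0 * (1 / p0) ** 2 + (1 - p0) * (1 / (1 - p0)) ** 2
E_neg_lsecond = p0 / p0**2 + (1 - p0) / (1 - p0) ** 2
print(E_lprime_sq, E_neg_lsecond, 1 / (p0 * (1 - p0)))    # all three agree

# Monte Carlo check of sqrt(n) * (p_hat - p0); the sum of n Bernoulli(p0)
# draws is Binomial(n, p0), so p_hat can be sampled directly.
rng = np.random.default_rng(0)
n, reps = 5_000, 100_000
p_hat = rng.binomial(n, p0, size=reps) / n
z = np.sqrt(n) * (p_hat - p0)
print("empirical variance:", z.var(), " target:", p0 * (1 - p0))
```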
