[ADMB Users] citation for empirical Bayes variance computation

Ian Fiske ianfiske at gmail.com
Mon May 10 18:59:36 PDT 2010


Thanks for the response.  The variance estimate seems like a reasonable
thing to propose. I think it might be the same idea derived in this
paper, except that they work with the posterior mean rather than the
maximizer of the posterior:

Hobert JP, Booth JG. Standard Errors of Prediction in Generalized Linear
Mixed Models. *Journal of the American Statistical Association*.
1998;93(441):262-272.


On Fri, May 7, 2010 at 11:05 AM, dave fournier <otter at otter-rsch.com> wrote:

> It came out of my head.
>
> Reasoning is as follows. Let the RE's be u and the other parameters
> be x.
>
> Let uhat(x) be the value of u which maximizes the function
> (joint probability dist if you are a Bayesian) l(x,u)
> of x and u for a given value of x. Then the delta method gives the
> estimate for the variance of uhat as
>
>           trans(uhat'(x)) * inv(log(L)_xx) * uhat'(x)
>
> where L(x) = int l(x,u) du
>
> If uhat(x) is known then  a candidate for the variance of u would be
>
>         inv(log(l)_uu)
>
> so add them together to reflect the fact that uncertainty in
> x gives uncertainty in the value of uhat.
> So it just seems like a reasonable calculation. Of course in nonlinear
> models this approximation can be quite bad in more extreme cases.
> _______________________________________________
> Users mailing list
> Users at admb-project.org
> http://lists.admb-project.org/mailman/listinfo/users
>
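The two-piece variance in Dave's reply can be checked on a toy model. Below is a minimal sketch (my own, not ADMB code) using a one-observation normal random-effects model: u ~ N(x, s_u^2), y ~ N(u, s_e^2), with assumed values for y, s_u, s_e, and the plug-in estimate x0. The delta-method term trans(uhat'(x)) * inv(log(L)_xx) * uhat'(x) and the conditional term inv(log(l)_uu) are computed and added; note that the variances use the inverse of the *negative* Hessians, a sign convention the post leaves implicit.

```python
# Toy sketch of the empirical Bayes variance combination from the post.
# All numbers (y, s_u, s_e, x0) are assumed for illustration only.
y, s_u, s_e = 1.2, 1.0, 0.5   # one observation, known variance components

def logl(x, u):
    # joint log-density (up to an additive constant):
    # u ~ N(x, s_u^2), y ~ N(u, s_e^2)
    return -0.5 * ((u - x) ** 2 / s_u ** 2 + (y - u) ** 2 / s_e ** 2)

def uhat(x):
    # maximizer of l(x, u) over u; closed form for this normal model
    w = (1 / s_u ** 2) / (1 / s_u ** 2 + 1 / s_e ** 2)
    return w * x + (1 - w) * y

h = 1e-5
x0 = 0.8                                            # plug-in estimate of x
du_dx = (uhat(x0 + h) - uhat(x0 - h)) / (2 * h)     # uhat'(x), finite diff

# curvature of log l in u at (x0, uhat(x0)): log(l)_uu
u0 = uhat(x0)
l_uu = (logl(x0, u0 + h) - 2 * logl(x0, u0) + logl(x0, u0 - h)) / h ** 2

# curvature of the marginal log L(x) = log int l(x,u) du; for this normal
# model the marginal is y ~ N(x, s_u^2 + s_e^2), so log(L)_xx is:
L_xx = -1.0 / (s_u ** 2 + s_e ** 2)

# combine the delta-method piece and the conditional piece as in the post
# (using inverses of the negative Hessians so both terms are positive):
var_u = du_dx * (-1.0 / L_xx) * du_dx + (-1.0 / l_uu)
print(var_u)
```

For these assumed numbers the delta-method term is 0.05 and the conditional term is 0.2, so the combined variance is 0.25; in a nonlinear model both pieces would have to come from numerical Hessians instead of closed forms.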



-- 
Ian Fiske
PhD Candidate
Department of Statistics
North Carolina State University

