Thanks for the response. The variance estimate seems like a reasonable thing to propose. I think it might be the same idea that is derived in this paper, except that they work with the posterior mean rather than the maximizer of the posterior:

Hobert JP, Booth JG. Standard Errors of Prediction in Generalized Linear Mixed Models. Journal of the American Statistical Association. 1998;93(441):262-272.

On Fri, May 7, 2010 at 11:05 AM, dave fournier <otter@otter-rsch.com> wrote:
It came out of my head.

The reasoning is as follows. Let the random effects be u and the other parameters be x.

Let uhat(x) be the value of u which maximizes the function l(x,u) of x and u (the joint probability density, if you are a Bayesian) for a given value of x. Then the delta method gives the estimate for the variance of uhat, due to the uncertainty in x, as

    trans(uhat'(x)) * inv(-log(L)_xx) * uhat'(x)

where L(x) = int l(x,u) du is the marginal likelihood, log(L)_xx is its Hessian with respect to x at the maximizer, and inv(-log(L)_xx) is the usual estimate of the covariance of x.
If x were known exactly, a candidate for the variance of u would be

    inv(-log(l)_uu)

the inverse curvature of log(l) with respect to u at uhat(x). So add the two terms together to reflect the fact that uncertainty in x gives uncertainty in the value of uhat.
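Written out as a single display (this is just a restatement of the sum described above, with hats marking the maximizers), the combined estimate is

\[
\widehat{\mathrm{Var}}(\hat{u}) \;\approx\;
\hat{u}'(\hat{x})^{\top}
\bigl[-\nabla^{2}_{xx}\log L(\hat{x})\bigr]^{-1}
\hat{u}'(\hat{x})
\;+\;
\bigl[-\nabla^{2}_{uu}\log l(\hat{x},\,\hat{u}(\hat{x}))\bigr]^{-1}.
\]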
It just seems like a reasonable calculation. Of course, in nonlinear models this approximation can be quite bad in more extreme cases.
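To make the recipe concrete, here is a minimal numerical sketch in Python for a toy normal-normal model (one random effect u, one parameter x = log of the random-effect variance). The model, the finite-difference steps, and all names are illustrative assumptions of mine, not ADMB code:

import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

# Toy normal-normal model (illustrative, not ADMB code):
#   y_i | u ~ N(u, 1),  u ~ N(0, exp(x)),  x = log random-effect variance
rng = np.random.default_rng(0)
y = rng.normal(0.5, 1.0, size=10)
n = len(y)

def logjoint(x, u):
    # log l(x, u): joint log density of data and random effect (up to constants)
    return -0.5 * np.sum((y - u) ** 2) - 0.5 * u ** 2 / np.exp(x) - 0.5 * x

def uhat(x):
    # inner maximizer over u for fixed x (closed form for this model)
    return y.sum() / (n + np.exp(-x))

def logL(x):
    # marginal log likelihood log L(x) = log int l(x, u) du, by 1-D quadrature
    m = uhat(x)
    val, _ = quad(lambda u: np.exp(logjoint(x, u) - logjoint(x, m)),
                  -np.inf, np.inf, epsabs=1e-12, epsrel=1e-12)
    return np.log(val) + logjoint(x, m)

# outer maximization over x
xhat = minimize_scalar(lambda x: -logL(x), bounds=(-5.0, 5.0),
                       method="bounded").x
m = uhat(xhat)
h = 1e-3  # finite-difference step

# delta-method piece: inv(-log(L)_xx) estimates Var(xhat)
d2logL = (logL(xhat + h) - 2.0 * logL(xhat) + logL(xhat - h)) / h ** 2
var_x = -1.0 / d2logL

# conditional piece: inv(-log(l)_uu) at (xhat, uhat(xhat))
d2logl = (logjoint(xhat, m + h) - 2.0 * logjoint(xhat, m)
          + logjoint(xhat, m - h)) / h ** 2
var_u_given_x = -1.0 / d2logl

# uhat'(x) by a central first difference
duhat = (uhat(xhat + h) - uhat(xhat - h)) / (2.0 * h)

# the proposed estimate: sum of the two terms
var_uhat = duhat * var_x * duhat + var_u_given_x
print(f"xhat = {xhat:.3f}, uhat = {m:.3f}, Var(uhat) ~ {var_uhat:.4f}")

In a real model the two Hessians and uhat'(x) are matrices rather than scalars, and the quadrature would be replaced by the Laplace approximation, but the arithmetic is the same.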
--
Ian Fiske
PhD Candidate
Department of Statistics
North Carolina State University