[ADMB Users] SE Estimates including process error in ADMB
Chris Gast
cmgast at gmail.com
Tue Mar 15 22:28:59 PDT 2011
For anyone who may be following along, I believe I may have found the
answer. By combining my two previous posts (where I try to estimate var(p1)
and var(g)), I was able to verify ADMB's estimate of var(p1); however, this
has led me to discover that the printed estimate in the .std and .cor
files is incorrect.
This is what I did:
While estimating var(g) [where g=exp(t1)], I used the delta method and
obtained a number that disagreed with ADMB's estimate. I hypothesized that
this was because the printed estimate for SE(t1) in the .std and .cor file
was only the first of the two components from eqn 3.1 of the ADMB-RE manual.
That is, I hypothesized that ADMB printed SE(t1), but it actually used eqn
3.1 when using the delta method to compute variances of functions involving
t1.
So SE(g) was estimated in ADMB as .057166 (see prior email), and my
hypothesis was that
SE(g) \approx sqrt{ exp(2*t1)*(var(t1) + X)}
where X = du/dtheta^T * Cov(theta) * du/dtheta is the second term of eqn
3.1, which was not included in the SE column for t1 in the .cor and .std
files.
Given all of the other quantities in the equation for SE(g) above, I solved
for X and found, in this case, that X = .003009676. When I recompute
Var(t1) = ADMB's printed var(t1) + .003009676, and then redo the delta
method myself, I obtain exactly the same variance estimate for p1 as ADMB
does.
Since I can't output the sensitivity matrix (du/dtheta) myself, I must
assume that the second component of eqn 3.1 in the ADMB RE manual for Cov(u)
is computed as prescribed (that is, that .003009676 is the correct number).
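In R, the back-calculation (and the redone delta method for p1) amounts to
roughly the following. The values for t1, SE(t1), and SE(g) are the ones
quoted below; cmu, SE(cmu), the correlation, and f1 are placeholders here,
to be read off the actual .std and .cor files:

  ## values from this thread
  t1    <- 0.032993                 # RE estimate
  se_t1 <- 0.0070425                # printed SE(t1) (inverse-Hessian part only?)
  se_g  <- 0.057166                 # printed SE(g) for g = exp(t1)

  ## solve SE(g)^2 = exp(2*t1) * (Var(t1) + X) for X
  X <- se_g^2 / exp(2 * t1) - se_t1^2      # about 0.003009676

  ## redo the delta method for p1 with the "full" Var(t1) = SE(t1)^2 + X
  cmu    <- -1.0                    # placeholder estimate of cmu
  se_cmu <- 0.10                    # placeholder SE(cmu)
  rho    <- 0.25                    # placeholder corr(cmu, t1) from the .cor file
  f1     <- 1.4                     # placeholder datum

  p1   <- 1 - exp((cmu + t1) * f1) / (1 + exp((cmu + t1) * f1))
  grad <- rep(-f1 * p1 * (1 - p1), 2)      # dp1/dcmu and dp1/dt1 are equal

  V <- matrix(c(se_cmu^2,             rho * se_cmu * se_t1,
                rho * se_cmu * se_t1,  se_t1^2 + X), 2, 2)
  sqrt(drop(t(grad) %*% V %*% grad))       # with the real .cor values this
                                           # reproduces ADMB's SE(p1)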
What this probably-confusing discussion amounts to is that the printed
variance estimates for RE terms appear to account only for the
inverse-Hessian component of variability, and do not by default add the
contribution of variability due to estimation of the fixed parameters. This
does not seem to be a problem when adding your own functions to ADMB code as
sdreport_ variables, as ADMB still uses both components in the delta method
variance approximation. However, this is problematic when someone such as
myself needs to specify a sampling distribution for portions of such a
function. My example with p1 is going to be extended to use a
Horvitz-Thompson estimation approach to compute N1=x1/p1, where I can
estimate the variability for p1 within ADMB, but I would like to specify the
sampling distribution for x1 outside of ADMB (in some post-processing R
code).
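To sketch the kind of post-processing I have in mind for N1 (everything
below is purely illustrative -- the count, the SEs, and the binomial-type
variance for x1 are placeholders, and independence of x1 and the estimate
of p1 is an assumption):

  x1     <- 150                     # hypothetical count
  p1_hat <- 0.42                    # p1 estimate from ADMB
  se_p1  <- 0.05                    # SE(p1) from the .std file
  var_x1 <- x1 * (1 - p1_hat)       # e.g. a binomial-style variance for x1

  N1_hat <- x1 / p1_hat
  ## delta method, treating x1 and p1_hat as independent
  var_N1 <- N1_hat^2 * (var_x1 / x1^2 + se_p1^2 / p1_hat^2)
  c(N1 = N1_hat, SE = sqrt(var_N1))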
I'm a modestly capable C/C++ programmer, and could probably spend some time
digging around in the source to figure out how to print both components of
the variance estimate for REs (I've dabbled in the source a bit already to
change the precision of printed values), but I suspect there is a much more
knowledgeable individual who might be able (and willing?) to accomplish
this.
Is there a specific reason that printed SE values for REs omit this
component? Would it not be better if, by default, the printed value for
SE(t1) was equal to the square root of eqn 3.1 in the manual---or perhaps if
both components were available in some output file?
Chris Gast
-----------------------------
Chris Gast
cmgast at gmail.com
On Tue, Mar 15, 2011 at 5:07 PM, Chris Gast <cmgast at gmail.com> wrote:
> Perhaps more simply, if I add the function (as a sdreport_number variable)
>
> g = exp(t1)
>
> where t1 is a RE term, ADMB computes
>
> g = exp(0.032993) = 0.0057166
>
> and estimates (in the .std and .cor files)
>
> SE(g) = 0.057166
>
> where a standard delta-method approximation would yield
>
> Var(g) \approx (dg/dt1)^2 * var(t1) = exp(2*0.032993) * (0.0070425)^2 =
> 5.297957e-05 --> SE(g) = 0.007278707
>
> So clearly, I must be doing something wrong, or have some misconception
> about what is reported in the .std and .cor files for RE terms. Any
> thoughts?
>
>
>
> Thanks,
>
> Chris Gast
>
>
>
>
>
> -----------------------------
> Chris Gast
> cmgast at gmail.com
>
>
> On Tue, Mar 15, 2011 at 12:41 PM, Chris Gast <cmgast at gmail.com> wrote:
>
>> Hello again,
>>
>>>
>> I'm still not sure that what I think ADMB is doing and what it is actually
>> doing are the same thing, and I suspect the discrepancy is related to the
>> second term on the rhs of eqn (3.1) in the ADMB-RE manual.
>>
>>>
>> As an example, I have a function
>>
>>>
>> p1 = 1-exp((cmu+t1)*f1)/(1+exp((cmu+t1)*f1))
>>
>>>
>> where cmu is a parameter, f1 is a datum, and t1 is a random effect. I have
>> added the function p1 as a sdreport_number in ADMB, and am trying to
>> recompute the variance estimate via the estimates in the .cor file myself (I
>> need to know what ADMB is doing, so I can use these estimates in additional
>> approximations). I cannot seem to recreate ADMB's variance estimate for p1.
>> I have attached a trimmed version of the .cor file (trimmed to relevant
>> portions to keep file size down) and some commented R code wherein I'm
>> trying to verify the SE estimate for p1. Would anyone be willing to
>> check my variance approximation, and see what I'm leaving out with respect
>> to what ADMB is doing? Either I've made a dumb mistake, or ADMB is not doing
>> what I think it's doing.
>>
>>
>> Calculations like this seem to work (ADMB output can be verified) if RE
>> terms are omitted, such as the meanp parameter function (noted in the R and
>> .cor file), which is the same as p1 but with t1=0 and f1=1.4; in addition,
>> my variance estimates for functions with REs tend to be lower than those from
>> ADMB. These two facts may indicate that I probably don't handle the second
>> term in eqn (3.1) of the ADMB-RE manual correctly. My understanding is that
>> the SE value for a RE term (such as t1) is already computed as in eqn (3.1)
>> in the ADMB-RE manual, and that is what is presented in the .std and .cor
>> files. Perhaps this is not the case? Or perhaps there is something wrong
>> with backwards-computing Cov(cmu,t1) = corr(cmu,t1)*SE(cmu)*SE(t1), where
>> the three terms on the rhs are all found in the .cor file? I could do some
>> testing of these theories if I could obtain the individual VCV matrix
>> components from Dave Fournier's previous messages on this subject, but I
>> don't see how that can be done (besides L_xx^{-1}, of course).
>>
>>>
>> Thanks very much,
>>
>>>
>> Chris Gast
>>
>>
>>
>>
>>
>>
>> -----------------------------
>> Chris Gast
>> cmgast at gmail.com
>>
>>>
>>>
>>> On Tue, Nov 16, 2010 at 4:21 AM, dave fournier <davef at otter-rsch.com> wrote:
>>>
>>>> Chris Gast wrote:
>>>>
>>>> I think this is discussed in the paper Hans and I wrote.
>>>> Anyway, since uhat(x) is the value of u that maximizes L(x,u),
>>>> it follows that
>>>>
>>>> L_u(x, uhat(x)) = 0 for all x, so differentiating wrt x you get
>>>>
>>>> L_xu + L_uu * uhat'(x) = 0, so that
>>>>
>>>> uhat'(x) = - L_uu^{-1} * L_xu
>>>>
>>>> If there are n x's and m u's then uhat'(x) is an n x m matrix.
>>>>
>>>> Thanks again, Dave. Could you clarify what the general entry of the
>>>>> gradient uhat'(xhat) looks like? I had thought this would be dx_i/du_j, but
>>>>> I think these would all be 1's or 0's (since, for instance, Shat_1 = shat +
>>>>> uhat_1). It is likely that I am misunderstanding something.
>>>>>
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Chris
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> -----------------------------
>>>>> Chris Gast
>>>>> cmgast at gmail.com
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Nov 15, 2010 at 7:00 PM, dave fournier <davef at otter-rsch.com> wrote:
>>>>>
>>>>> OK,
>>>>>
>>>>> there are only a few things to play with. For notation, let
>>>>>
>>>>> F(x,u) be the likelihood for x and u.
>>>>>
>>>>> let
>>>>>
>>>>> L(x) = int F(x,u) du
>>>>>
>>>>> be what we get after integrating out u either exactly or by the
>>>>> Laplace approx.
>>>>>
>>>>> and let xhat, uhat(xhat) be the maximizing values.
>>>>>
>>>>> We have the Hessians L_xx(xhat) and F_uu(xhat, uhat(xhat)),
>>>>> and the gradient uhat'(xhat).
>>>>> Then the covariance matrix for (x,u) is assumed to be
>>>>>
>>>>>   [ L_xx^{-1}                L_xx^{-1} * uhat'(xhat)                            ]
>>>>>   [ uhat'(xhat) * L_xx^{-1}  F_uu^{-1} + uhat'(xhat) * L_xx^{-1} * uhat'(xhat)  ]
>>>>>
>>>>> The idea is that u - uhat(xhat) is independent of xhat, i.e., the only
>>>>> correlation is between xhat and uhat(x).
>>>>>
>>>>> Put in transposes where necessary.
>>>>>
>>>>> For any function of x,u the covariance is computed by the delta
>>>>> method.
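(For anyone following along: assembling that block covariance and applying
the delta method in R might look roughly like the toy sketch below; the
matrices and the gradient are made-up numbers, just to show the shapes,
with transposes added as Dave notes.)

  Lxx_inv <- matrix(c(0.04, 0.01, 0.01, 0.09), 2, 2)  # toy L_xx^{-1}, n = 2
  Fuu_inv <- matrix(0.02, 1, 1)                       # toy F_uu^{-1}, m = 1
  du      <- matrix(c(0.3, -0.1), 1, 2)               # toy uhat'(xhat), here m x n

  Cov_xx <- Lxx_inv
  Cov_xu <- Lxx_inv %*% t(du)
  Cov_uu <- Fuu_inv + du %*% Lxx_inv %*% t(du)
  V <- rbind(cbind(Cov_xx, Cov_xu),
             cbind(t(Cov_xu), Cov_uu))                # joint Cov(x, u)

  ## delta method for any function h(x, u): Var(h) ~ grad^T V grad
  grad <- c(1.0, 0.5, 2.0)                            # toy dh/d(x, u)
  drop(t(grad) %*% V %*% grad)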
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users at admb-project.org
>>>>>
>>>>> http://lists.admb-project.org/mailman/listinfo/users
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>