[ADMB Users] Restricting magnitude of random effects estimates, achieving convergence of RE models

Chris Gast cmgast at gmail.com
Tue Aug 3 14:27:50 PDT 2010


Hello again,

I'm simulating age-at-harvest data (and accompanying effort data) and trying
to fit a series of 12 models, the most complex of which contains 3 random
effects vectors (all normally distributed).  I'm varying the dimensionality
of the problem, but my current scenario involves random effects vectors of
dimension ~25.  There are also approximately 15 to 40 fixed parameters (6 of
which are means and standard deviations corresponding to the random effects
vectors).

A frequent problem I've encountered is that during estimation, ADMB often
inflates the magnitude of the random effects estimates until the objective
function value becomes NaN, from which it cannot recover.  I've tried using
random_effects_bounded_vectors, but this frequently leads to optimization
failure ("hessian does not appear to be positive definite"), regardless of
how wide I make the bounds.  I've concocted a penalty function that
alleviates this problem most of the time: prior to multiplying the
log-likelihood by -1, I subtract 10 times the sum of the squared random
effects estimates.  In code, this looks like:

....previous log-likelihood computations....

sumt = 0;
for(i=0;i<nyears;i++){      // accumulate the sum of squared random effects
  sumt += t[i]*t[i];
}
totL -= sumt*10;            // quadratic penalty on large random effects

totL *= -1;                 // ADMB minimizes, so negate the log-likelihood

where t is defined as a random_effects_vector, sumt is a dvariable, and totL
is the objective function value.  Sometimes a value of 10 works, and
sometimes an unreasonable (but equally arbitrary) value of 100,000 is
necessary to obtain convergence.
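
If my algebra is right, subtracting lambda times the sum of squared random
effects from the log-likelihood is the same as placing an extra zero-mean
normal prior on each t[i] with standard deviation 1/sqrt(2*lambda), so a
factor of 10 implies an sd of about 0.22 and a factor of 100,000 implies an
sd of about 0.002.  The same penalty can be written more compactly with
ADMB's norm2() (lambda here is just an illustrative constant, not part of
my model):

double lambda = 10.0;     // implied shrinkage prior: t[i] ~ N(0, 1/sqrt(2*lambda))
totL -= lambda*norm2(t);  // norm2(t) is the sum of squared elements of t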

Prior to this code, I use the usual

totL += -nyears*log(csigma) - 0.5*norm2(t/csigma);

or alternatively

totL += -0.5*norm2(t);
tt = csigma*t;

with appropriate definitions for the standard deviation csigma and the
vectors t and tt.  I'll also note that each random effect enters through
either an exponential or logistic transformation of some demographic
process.
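
For concreteness, here is a sketch of how the second (standardized) form
then feeds a logistic transformation; the rate p and the mean cmu are
placeholder names for illustration, not my actual model:

// following the two lines above (t ~ N(0,1), tt = csigma*t)
for(i=0;i<nyears;i++){
  // logistic link keeps the demographic rate in (0,1)
  p[i] = 1.0/(1.0+exp(-(cmu+tt[i])));
}

(Using ADMB's mfexp() in place of exp() can also help keep early function
evaluations out of NaN territory, since it guards against overflow for
large arguments.)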

Of course, the higher the arbitrary scale factor (10 to 100,000), the
greater the restriction I place on the variance parameter csigma.  That
parameter is of some interest to me, and I don't want to limit its range.

I'm willing to accept that some models will fail to fit, particularly
because many models are simplifications of the true simulation model.  The
problem is that to obtain a reasonable number of "successful" simulations, I
need to limit the failure rate of such models.

Does anyone have some experience with such a problem that they'd be willing
to share? How have others dealt with problems of this nature?  Is there some
customary penalty function of which I'm unaware?



Thanks very much,

Chris Gast
University of Washington
Quantitative Ecology and Resource Management







