[ADMB Users] Restricting magnitude of random effects estimates, achieving convergence of RE models

Chris Gast cmgast at gmail.com
Wed Aug 4 11:02:26 PDT 2010


Thanks, Hans, for your ideas.  I had considered such a logistic
transformation, but hoped there might be a different solution.  I'll give
that a try, and report back.

I do bound variance parameters, but when my random effect estimates become
very large, so do my sigma parameters---they end up bouncing off the upper
boundary (regardless of where I set it), and the optimization then fails to
converge.

Fading out the penalty function in phases is, unfortunately, not really an
option, because the problems occur during the final phase (the phase in
which the SD parameters and random effects are estimated).

I suppose I'm a bit confused about the -noinit option.  Shouldn't ADMB use
the previous RE estimates as the starting point for the next optimization by
default?  Perhaps I'm misunderstanding something.

Thanks again,

Chris




-----------------------------
Chris Gast
cmgast at gmail.com


On Tue, Aug 3, 2010 at 11:09 PM, H. Skaug <hskaug at gmail.com> wrote:

> Chris,
>
> Thanks for raising a point of general interest.
>
> random_effects_bounded_vectors are not working. That is the reason
> it is not mentioned in the RE manual (I hope). I agree that it would be
> useful to have. In its absence we have to use more ad hoc techniques.
>
> One possibility is to obtain boundedness via the transformation
> t_b = a + b*exp(t)/(1+exp(t)) and to choose a and b properly.
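>
> In ADMB code this might look like the sketch below (untested; t is the
> random_effects_vector, t_b is a vector holding the bounded values, and
> a and b are constants you choose so that (a, a+b) covers the plausible
> range of the effect):
>
>   totL += -0.5*norm2(t);                      // t itself stays ~ N(0,1)
>   for(i=0;i<nyears;i++){
>     t_b[i] = a + b*exp(t[i])/(1+exp(t[i]));   // t_b[i] lies in (a, a+b)
>   }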
>
> Personally I prefer to put bounds on the variance parameters (csigma)
> rather than using the penalties. The latter distorts the interpretation
> of the random effects.
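>
> For example, in the PARAMETER_SECTION (the bounds and phase here are
> placeholders; pick values generous enough for your problem):
>
>   init_bounded_number csigma(0.001,5.0,2)   // lower, upper, phase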
>
> Phases could be useful here. Your penalty could "fade out" with increasing
> phase.
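>
> A rough, untested sketch of that idea, using current_phase() and
> last_phase() in the PROCEDURE_SECTION (the weights are arbitrary):
>
>   double pen_wt = 1000.0/pow(10.0,current_phase()-1); // shrinks each phase
>   if (last_phase()) pen_wt = 0.0;                     // gone in final phase
>   totL -= pen_wt*norm2(t);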
>
> A separate thing that may help is the "-noinit" command line option,
> which makes ADMB use the previous RE estimates as the starting point
> for the next optimization.
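>
> For example, if your compiled model is called "mymodel":
>
>   mymodel -noinit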
>
> Hans
>
> On Tue, Aug 3, 2010 at 11:27 PM, Chris Gast <cmgast at gmail.com> wrote:
> > Hello again,
> >
> > I'm simulating age-at-harvest data (and accompanying effort data) and
> > trying to fit a series of 12 models, the most complex of which contains
> > 3 random effects vectors (all normally distributed).  I'm varying the
> > dimensionality of the problem, but my current scenario involves random
> > effects vectors of dimension ~25.  There are also approximately 15 to 40
> > fixed parameters (6 of which are means and standard deviations
> > corresponding to the random effects vectors).
> >
> > A frequent problem I've encountered is that during estimation, ADMB
> > often elevates the magnitude of random effects estimates such that the
> > objective function value enters NaN territory, from which it cannot
> > recover.  I've tried using random_effects_bounded_vectors, but this
> > frequently leads to optimization failure ("hessian does not appear to be
> > positive definite"), regardless of the magnitude of the limits I impose.
> > I've concocted a penalty function that helps alleviate this problem
> > (most of the time): prior to multiplying the log-likelihood by -1, I
> > subtract 10 times the sum of squared random effects estimates.  In code,
> > this looks like:
> >
> >   ....previous log-likelihood computations....
> >   sumt=0;
> >   for(i=0;i<nyears;i++){
> >     sumt=sumt+t[i]*t[i];
> >   }
> >   totL -= sumt*10;
> >   totL *= -1;
> >
> > where t is defined as a random_effects_vector, sumt is a dvariable, and
> > totL is the objective function value.  Sometimes a value of 10 works,
> > and sometimes an unreasonable (but equally arbitrary) value of 100,000
> > is necessary to obtain convergence.
> >
> > Prior to this code, I use the usual
> >
> >   totL += -(nyears)*log(csigma)-.5*norm2(t/csigma);
> >
> > or alternatively
> >
> >   totL += -.5*norm2(t);
> >   tt = csigma*t;
> >
> > with appropriate definitions for the variance parameter csigma, and t
> > and tt.  I'll also note that each of the random effects occurs within
> > either an exponential or logistic transformation of some demographic
> > process.
> >
> > Of course, the higher the arbitrary scale factor (10 to 100,000), the
> > greater the restriction I place on the variance parameter, csigma.  This
> > is a parameter of some interest to me, and I don't want to limit its
> > range.
> >
> > I'm willing to accept that some models will fail to fit, particularly
> > because many models are simplifications of the true simulation model.
> > The problem is that to obtain a reasonable number of "successful"
> > simulations, I need to limit the failure rate of such models.
> >
> > Does anyone have some experience with such a problem that they'd be
> > willing to share?  How have others dealt with problems of this nature?
> > Is there some customary penalty function of which I'm unaware?
> >
> > Thanks very much,
> >
> > Chris Gast
> > University of Washington
> > Quantitative Ecology and Resource Management
> >
> > -----------------------------
> > Chris Gast
> > cmgast at gmail.com
>