[ADMB Users] Restricting magnitude of random effects estimates, achieving convergence of RE models

Mark Maunder mmaunder at iattc.org
Wed Aug 4 09:32:22 PDT 2010


Hans,

Does using -noinit speed up the convergence?

Thanks,

Mark


 
Mark Maunder 
Head of the Stock Assessment Program
Inter-American Tropical Tuna Commission
8604 La Jolla Shores Drive
La Jolla, CA, 92037-1508, USA
  
Tel: (858) 546-7027
Fax: (858) 546-7133
mmaunder at iattc.org
http://www.fisheriesstockassessment.com/TikiWiki/tiki-index.php?page=Mark+Maunder
 
Visit the AD Model Builder project at
 http://admb-project.org/
 
See the following website for information on fisheries stock assessment
http://www.fisheriesstockassessment.com/
 

-----Original Message-----
From: users-bounces at admb-project.org [mailto:users-bounces at admb-project.org] On Behalf Of H. Skaug
Sent: Tuesday, August 03, 2010 11:10 PM
To: Chris Gast
Cc: users at admb-project.org
Subject: Re: [ADMB Users] Restricting magnitude of random effects estimates, achieving convergence of RE models

Chris,

Thanks for raising a point of general interest.

random_effects_bounded_vectors are not working. That is the reason
it is not mentioned in the RE manual (I hope). I agree that it would be
useful to have. In its absence we have to use more ad hoc techniques.

One possibility is to obtain boundedness via the transformation
t_b = a + b*exp(t)/(1+exp(t)), which maps t into the interval (a, a+b),
and to choose a and b properly.
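
For example, something along these lines in the PROCEDURE_SECTION (just an
untested sketch; t is the unbounded random_effects_vector, t_b is a vector
used in place of t in the model, and a and b are constants chosen to give
the bounds you want):

  // Map each unbounded t(i) into the interval (a, a+b).
  for (i = 1; i <= nyears; i++)
  {
    t_b(i) = a + b * mfexp(t(i)) / (1.0 + mfexp(t(i)));  // mfexp = ADMB's overflow-protected exp
  }
  // Use t_b, not t, inside the exponential/logistic demographic relationships.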

Personally I prefer to put bounds on the variance parameter (csigma)
rather than using penalties. The latter distorts the interpretation
of the random effects.
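
For example, in the PARAMETER_SECTION (the limits and phase below are
arbitrary and only meant as a sketch; pick them for your own problem):

  // Bound the RE standard deviation directly instead of penalizing t.
  init_bounded_number csigma(0.001, 5.0, 2)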

Phases could be useful here. Your penalty could "fade out" with increasing
phase.
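
For example, in place of the fixed multiplier of 10 to 100,000 (again only
a rough sketch, reusing your totL and t; the weights here are arbitrary):

  // Strong ridge penalty on t in early phases, switched off in the last phase.
  if (!last_phase())
  {
    double wt = pow(10.0, 6 - current_phase());  // e.g. 1e5 in phase 1, shrinking each phase
    totL -= wt * norm2(t);                       // norm2(t) = sum of squared t(i)
  }
  totL *= -1;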

A separate thing that may help is the "-noinit" command line option,
which makes ADMB use the previous RE estimates as the starting point
for the next optimization.
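
For example (assuming the compiled executable is called model):

  model -noinit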

Hans




On Tue, Aug 3, 2010 at 11:27 PM, Chris Gast <cmgast at gmail.com> wrote:
> Hello again,
> I'm simulating age-at-harvest data (and accompanying effort data) and trying
> to fit a series of 12 models, the most complex of which contains 3 random
> effects vectors (all normally-distributed).  I'm varying the dimensionality
> of the problem, but my current scenario involves random effects vectors of
> dimension ~25.  There are also approximately 15 to 40 fixed parameters (6 of
> which are means and standard deviations corresponding to the random effects
> vectors).
> A frequent problem I've encountered is that during estimation, ADMB often
> elevates the magnitude of random effects estimates such that the objective
> function value enters NaN territory, from which it cannot recover.  I've
> tried using random_effects_bounded_vectors, but this frequently leads to
> optimization failure ("hessian does not appear to be positive definite"),
> regardless of the magnitude of the limits I impose.  I've concocted a
> penalty function that helps alleviate this problem (most of the time): Prior
> to multiplying the log-likelihood by -1, I subtract 10 times the sum of
> squared random effects estimates.  In code, this looks like:
> ....previous log-likelihood computations....
> sumt=0;
> for(i=0;i<nyears;i++){
> sumt=sumt+t[i]*t[i];
> }
> totL -= sumt*10;
> totL *= -1;
> where t is defined as a random_effects_vector, sumt is a dvariable, and totL
> is the objective function value.  Sometimes a value of 10 works, and
> sometimes an unreasonable (but equally arbitrary) value of 100,000 is
> necessary to obtain convergence.
> Prior to this code, I use the usual
> totL += -(nyears)*log(csigma)-.5*norm2(t/csigma);
> or alternatively
> totL  += -.5*norm2(t);
> tt = csigma*t;
> with appropriate definitions for the variance parameter csigma, and t and
> tt.  I'll also note that each of the random effects occurs within either an
> exponential or logistic transformation of some demographic process.
> Of course, the higher the arbitrary scale factor (10 - 100,000), the greater
> restriction I am placing on the variance parameter, csigma.  This is a
> parameter of some interest for me, and I don't want to limit its range.
> I'm willing to accept that some models will fail to fit, particularly
> because many models are simplifications of the true simulation model.  The
> problem is that to obtain a reasonable number of "successful" simulations, I
> need to limit the failure rate of such models.
> Does anyone have some experience with such a problem that they'd be willing
> to share? How have others dealt with problems of this nature?  Is there some
> customary penalty function of which I'm unaware?
>
>
> Thanks very much,
> Chris Gast
> University of Washington
> Quantitative Ecology and Resource Management
>
>
>
>
>
>
>
> -----------------------------
> Chris Gast
> cmgast at gmail.com
>
> _______________________________________________
> Users mailing list
> Users at admb-project.org
> http://lists.admb-project.org/mailman/listinfo/users
>
>
_______________________________________________
Users mailing list
Users at admb-project.org
http://lists.admb-project.org/mailman/listinfo/users


