[ADMB Users] Restricting magnitude of random effects estimates, achieving convergence of RE models
John Sibert
sibert at hawaii.edu
Tue Aug 3 19:10:10 PDT 2010
The report function is called during every phase, but you could check
last_phase() and only do your abundance reconstruction if it is true,
to ensure that the reconstruction is run only once.
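For example, a minimal sketch (reconstruct_abundance() here stands for
a hypothetical user-defined FUNCTION containing your reconstruction
code):

   REPORT_SECTION
     if (last_phase())
     {
       // true only in the final optimization phase,
       // so the reconstruction runs just once
       reconstruct_abundance();
     }
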
John
On 08/03/2010 01:37 PM, Chris Gast wrote:
> Thanks Mark. That's interesting that it solved the problem you were
> having--I would definitely call that unexpected behavior. I believe,
> however, that I am already in this situation--the only
> non-objective-function calculations I have are an abundance
> reconstruction based on estimated parameters, and I put that in the
> report section in the interest of speed and efficiency, so it would
> only run once (right?).
>
>
> Chris
>
> -----------------------------
> Chris Gast
> cmgast at gmail.com
>
>
> On Tue, Aug 3, 2010 at 2:48 PM, Mark Maunder <mmaunder at iattc.org> wrote:
>
> Chris,
>
> Not sure if this helps, but I was having the same problem with a
> model I was running for impact analysis. I had two models running
> simultaneously: one with the covariates estimated, and one with them
> fixed at zero to determine the impact of the covariates. Since the
> second model shared the parameters of the first model but did not
> fit to any data, it should not have influenced the results. I got
> the NaN, so I turned off the second model and it worked. I then
> changed the second model so that it is only called from the report
> section, and it still worked.
>
> So you might want to place any calculations that are not required
> to calculate the objective function in a function called from the
> report section (and/or called only in the sd_phase()), as in the
> sketch below.
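>
> A minimal sketch of that arrangement (do_impact_model() is a
> hypothetical user-defined FUNCTION standing in for the calculations
> that the objective function does not need):
>
>    PROCEDURE_SECTION
>      // ...objective function calculations...
>
>      if (sd_phase())
>      {
>        // runs only in the sd phase, not during
>        // ordinary function evaluations
>        do_impact_model();
>      }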
>
> Hope this helps,
>
> Mark
>
> Mark Maunder
>
> Head of the Stock Assessment Program
>
> Inter-American Tropical Tuna Commission
> 8604 La Jolla Shores Drive
> La Jolla, CA, 92037-1508, USA
>
> Tel: (858) 546-7027
> Fax: (858) 546-7133
> mmaunder at iattc.org
> http://www.fisheriesstockassessment.com/TikiWiki/tiki-index.php?page=Mark+Maunder
>
> Visit the AD Model Builder project at
> http://admb-project.org/
>
> See the following website for information on fisheries stock
> assessment
>
> http://www.fisheriesstockassessment.com/
>
> *From:* users-bounces at admb-project.org
> [mailto:users-bounces at admb-project.org] *On Behalf Of *Chris Gast
> *Sent:* Tuesday, August 03, 2010 2:28 PM
> *To:* users at admb-project.org
> *Subject:* [ADMB Users] Restricting magnitude of random effects
> estimates, achieving convergence of RE models
>
> Hello again,
>
> I'm simulating age-at-harvest data (and accompanying effort data)
> and trying to fit a series of 12 models, the most complex of which
> contains 3 normally distributed random effects vectors. I'm varying
> the dimensionality of the problem, but my current scenario involves
> random effects vectors of dimension ~25. There are also
> approximately 15 to 40 fixed parameters (6 of which are the means
> and standard deviations corresponding to the random effects vectors).
>
> A frequent problem I've encountered is that during estimation,
> ADMB often elevates the magnitude of random effects estimates such
> that the objective function value enters NaN territory, from which
> it cannot recover. I've tried using
> random_effects_bounded_vectors, but this frequently leads to
> optimization failure ("hessian does not appear to be positive
> definite"), regardless of the magnitude of the limits I impose.
> I've concocted a penalty function that helps alleviate this
> problem (most of the time): Prior to multiplying the
> log-likelihood by -1, I subtract 10 times the sum of squared
> random effects estimates. In code, this looks like:
>
> ....previous log-likelihood computations....
>
>    sumt = 0;
>    for (i = 0; i < nyears; i++)
>    {
>      sumt += t[i]*t[i];   // accumulate sum of squared random effects
>    }
>    totL -= sumt*10;       // penalty: 10 times the sum of squares
>    totL *= -1;            // negate, since ADMB minimizes the objective
>
> where t is defined as a random_effects_vector, sumt is a
> dvariable, and totL is the objective function value. Sometimes a
> value of 10 works, and sometimes an unreasonable (but equally
> arbitrary) value of 100,000 is necessary to obtain convergence.
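>
> (Since sumt is just the sum of squared elements of t, the loop above
> is equivalent to the built-in norm2(): totL -= 10.0*norm2(t);)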
>
> Prior to this code, I use the usual
>
> totL += -(nyears)*log(csigma) - .5*norm2(t/csigma);
>
> or alternatively
>
> totL += -.5*norm2(t);
> tt = csigma*t;
>
> with appropriate definitions for the variance parameter csigma and
> for t and tt. I'll also note that each of the random effects enters
> through either an exponential or a logistic transformation of some
> demographic process.
>
> Of course, the higher the arbitrary scale factor (10 to 100,000),
> the greater the restriction I place on the variance parameter,
> csigma. This is a parameter of some interest to me, and I don't
> want to limit its range.
>
> I'm willing to accept that some models will fail to fit,
> particularly because many models are simplifications of the true
> simulation model. The problem is that to obtain a reasonable
> number of "successful" simulations, I need to limit the failure
> rate of such models.
>
> Does anyone have some experience with such a problem that they'd
> be willing to share? How have others dealt with problems of this
> nature? Is there some customary penalty function of which I'm
> unaware?
>
> Thanks very much,
>
> Chris Gast
>
> University of Washington
>
> Quantitative Ecology and Resource Management
>
> -----------------------------
> Chris Gast
> cmgast at gmail.com
>
--
John Sibert
Emeritus Researcher, SOEST
University of Hawaii at Manoa
Visit the ADMB project http://admb-project.org/