Thanks, Hans, for your ideas. I had considered such a logistic transformation, but hoped there might be a different solution. I'll give that a try and report back.

I do bound my variance parameters, but when my random-effect estimates become very large, so do my sigma parameters---they end up bouncing off the upper boundary (regardless of where I place it), and non-convergence follows.
Fading out the penalty function in phases is, unfortunately, not really an option, because it is the final phase (the SD-parameter and RE estimation phase) in which the problems occur.
I suppose I'm a bit confused about the -noinit option. Shouldn't ADMB use the previous RE estimates as the starting point for the next optimization by default? Perhaps I'm misunderstanding something.
Thanks again,

Chris

-----------------------------
Chris Gast
cmgast@gmail.com
On Tue, Aug 3, 2010 at 11:09 PM, H. Skaug <hskaug@gmail.com> wrote:
Chris,

Thanks for raising a point of general interest.

random_effects_bounded_vectors are not working. That is the reason
it is not mentioned in the RE manual (I hope). I agree that it would be
useful to have. In its absence we have to use more ad hoc techniques.

One possibility is to obtain boundedness via the transformation
t_b = a + b*exp(t)/(1+exp(t)) and to choose a and b properly.
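For what it's worth, a minimal sketch of how that transform might look in a
PROCEDURE_SECTION, assuming t is the unbounded random_effects_vector, t_b is
a dvar_vector of the same length (indexed 1..nyears here), and a and b are
constants giving the lower bound and the range, so that t_b stays strictly
inside (a, a+b):

    for (int i=1; i<=nyears; i++)
    {
      // logistic transform of the unbounded effect; mfexp() is ADMB's
      // overflow-protected exponential
      t_b(i) = a + b*mfexp(t(i))/(1.0 + mfexp(t(i)));
    }

The same thing could be vectorized with elem_div(), but the loop keeps the
indexing explicit.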

Personally I prefer to put bounds on the variance parameters (csigma)
rather than using penalties. The latter distorts the interpretation
of the random effects.

Phases could be useful here. Your penalty could "fade out" with increasing
phase.

A separate thing that may help is the "-noinit" command line option,
which makes ADMB use the previous RE estimates as the starting point
for the next optimization.
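(For example, something like "mymodel -noinit" on the command line, assuming
the compiled executable is named mymodel, alongside whatever other run-time
options are already in use.)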

Hans
On Tue, Aug 3, 2010 at 11:27 PM, Chris Gast <cmgast@gmail.com> wrote:
> Hello again,
> I'm simulating age-at-harvest data (and accompanying effort data) and trying
> to fit a series of 12 models, the most complex of which contains 3 random
> effects vectors (all normally distributed). I'm varying the dimensionality
> of the problem, but my current scenario involves random effects vectors of
> dimension ~25. There are also approximately 15 to 40 fixed parameters (6 of
> which are means and standard deviations corresponding to the random effects
> vectors).
> A frequent problem I've encountered is that during estimation, ADMB often
> elevates the magnitude of random effects estimates such that the objective
> function value enters NaN territory, from which it cannot recover. I've
> tried using random_effects_bounded_vectors, but this frequently leads to
> optimization failure ("hessian does not appear to be positive definite"),
> regardless of the magnitude of the limits I impose. I've concocted a
> penalty function that helps alleviate this problem (most of the time): prior
> to multiplying the log-likelihood by -1, I subtract 10 times the sum of
> squared random effects estimates. In code, this looks like:
> ....previous log-likelihood computations....
> sumt = 0;
> for (i=0; i<nyears; i++) {
>   sumt = sumt + t[i]*t[i];
> }
> totL -= sumt*10;
> totL *= -1;
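As an aside, if t holds all of the penalized random effects, that loop can be
collapsed into a single statement using ADMB's norm2(), which returns the sum
of squared elements and already appears in the likelihood code below:

    totL -= 10.0*norm2(t);   // same penalty, same arbitrary weight of 10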
> where t is defined as a random_effects_vector, sumt is a dvariable, and totL
> is the objective function value. Sometimes a value of 10 works, and
> sometimes an unreasonable (but equally arbitrary) value of 100,000 is
> necessary to obtain convergence.
> Prior to this code, I use the usual
> totL += -(nyears)*log(csigma)-.5*norm2(t/csigma);
> or alternatively
> totL += -.5*norm2(t);
> tt = csigma*t;
> with appropriate definitions for the variance parameter csigma, and t and
> tt. I'll also note that each of the random effects occurs within either an
> exponential or logistic transformation of some demographic process.
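Purely for illustration (the names S and mu_S are hypothetical, not taken
from the model described above), one such transformation might look like

    // logit-scale mean plus the scaled random effect, mapped to (0,1)
    S(i) = 1.0/(1.0 + mfexp(-(mu_S + tt(i))));

so the penalty on t acts on the logit (or log) scale rather than directly on
the demographic rate.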
> Of course, the higher the arbitrary scale factor (10 to 100,000), the greater
> the restriction I am placing on the variance parameter, csigma. This is a
> parameter of some interest to me, and I don't want to limit its range.
> I'm willing to accept that some models will fail to fit, particularly
> because many models are simplifications of the true simulation model. The
> problem is that, to obtain a reasonable number of "successful" simulations, I
> need to limit the failure rate of such models.
> Does anyone have experience with such a problem that they'd be willing
> to share? How have others dealt with problems of this nature? Is there some
> customary penalty function of which I'm unaware?
>
> Thanks very much,
> Chris Gast
> University of Washington
> Quantitative Ecology and Resource Management
>
> -----------------------------
> Chris Gast
> cmgast@gmail.com
>
> _______________________________________________
> Users mailing list
> Users@admb-project.org
> http://lists.admb-project.org/mailman/listinfo/users