[ADMB Users] non-convergence and "inner maxg = 0"

dave fournier otter at otter-rsch.com
Sat Mar 20 01:27:37 PDT 2010



>I think Dave probably means his trick where you assume that the random
>effect is N(0,1) and when you use it you multiply it by the sd

>Pred=mu+y*sd

>g=0.5*norm2(y)
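
For concreteness, here is a minimal (untested) sketch of how that
parameterization might look in an ADMB-RE template, assuming Gaussian
observations Y(1,nobs) with sd exp(log_sig) and one standardized effect
per observation; the names Y, nobs and log_sig are placeholders, not
from the original model:

DATA_SECTION
  init_int nobs
  init_vector Y(1,nobs)

PARAMETER_SECTION
  init_number mu
  init_number log_sig                 // log of the observation sd (placeholder)
  init_bounded_number sd(0.0,5.0)     // random-effect sd; can be started near 0 here
  random_effects_vector y(1,nobs)     // standardized N(0,1) effects
  objective_function_value g

PROCEDURE_SECTION
  g = 0.5*norm2(y);                   // negative log N(0,1) density of y (constants dropped)
  for (int i=1;i<=nobs;i++)
  {
    // the effect enters the prediction scaled by sd, i.e. Pred = mu + y(i)*sd
    g += log_sig + 0.5*square((Y(i) - mu - sd*y(i))/exp(log_sig));
  }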

It is not really a trick, just a different parameterization of the
problem. The point is that when sd --> 0 it behaves nicely, so you can
start the model off with a small value of sd such as .05 or .01, which is
important for some of the more numerically difficult problems. Also, you
should use the concentrated likelihood, as in

init_bounded_number u(-0.5,1.5)   // dimensionless variance multiplier; its value is known to be 1

 g = 0.5*norm2(mu_r);                      // N(0,1) contribution for the random-effects vector mu_r
 dvariable r2=norm2(y - Ey);               // residual sum of squares (y = data, Ey = predicted values)
 dvariable vhat=r2*u/nobs;                 // residual variance written as u times the profile MLE r2/nobs
 g += 0.5*nobs*log(vhat) + 0.5*r2/vhat;    // concentrated negative log-likelihood (2*pi terms dropped)

This replaces the numerically more difficult log_sigma
with the dimensionless quantity u whose value is known to be 1.
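Spelled out (with n = nobs, r^2 = r2, and sigma^2 the residual variance),
substituting sigma^2 = u*r^2/n into the Gaussian negative log-likelihood,
with the 2*pi constants dropped, gives

  -\log L = \frac{n}{2}\log\sigma^2 + \frac{r^2}{2\sigma^2}
          = \frac{n}{2}\log\!\left(\frac{u\,r^2}{n}\right) + \frac{n}{2u}

and minimizing over u (set n/(2u) - n/(2u^2) = 0) gives u = 1, which
recovers the usual profile estimate \hat\sigma^2 = r^2/n.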

Also, for comparison purposes, you should add the log(sqrt(2*pi)) terms
to the negative log-likelihood.
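
In the snippet above that amounts to one extra constant term, for example

 const double pi = 3.14159265358979;
 g += 0.5*nobs*log(2.0*pi);                // equals nobs*log(sqrt(2*pi))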

Finally, don't tell us how good ADMB is for some problems. Tell the
ordinary users on the R list. We already know, and they will never hear
it from the "experts" there.
