[ADMB Users] glmm.admb in R

Chris Gast cmgast at gmail.com
Thu Feb 17 08:14:41 PST 2011


Until the problem is actually solved, I can offer the technique I use as a
workaround for running numerous simulations.

I run two R processes simultaneously.  The first one fits models in ADMB via
the shell() command, contained in a loop that counts the number of successful
simulation runs.  As part of this first process, I use ">" at the command
line to redirect output to a text file.  The second process repeatedly (every
5 seconds, in my case) checks the size of this text file to see if it has
grown "too large" (a threshold determined by trial and error).  If the file
does exceed that threshold, I again use a shell() command to kill the running
process.  In my case (Windows), this looks like

shell("taskkill /F /IM Sim1*")

because I am fitting many models whose executables all start with
"Sim1XXXXXXX".  When the first R process (the one running the model fits)
completes a fit (or fails, because the second process killed it), I have it
check whether the size of any of the .rep, .par, and .std files is less than
100 bytes, which signals a failure (an essentially empty file).  The loop in
the first process then tries a second guess at the parameter starting values
on the same data.  If that attempt also fails, the first process moves on to
the next dataset and records the previous one as a "failure."

It might sound a bit complicated and it's not perfect, but it gets me
through thousands of simulations successfully. I'm happy to provide more
detail if you're interested.

I'll note that my infinite-loop problem is the "innermaxg=0" problem
(discussed here before), in which that message is repeated at a very high
rate, so the temporary file grows quickly.  I don't know whether that is the
case with the error you're receiving.  I'm also not running my simulations
through glmm.admb, so some modifications would be needed to adapt this
approach to your situation, such as wrapping the relevant R console output
in sink() calls to create the temporary file for the second process to
monitor.
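
For a glmm.admb-based loop, something along these lines could serve as the
fitting step in the first process, with sink() diverting the console output
into the file the watchdog polls (the file name is again only a placeholder,
and you would want to verify that sink() actually captures the repeated ADMB
messages in your setup):

log_file <- "Sim1_output.txt"      # same placeholder file the watchdog polls
sink(log_file)                     # divert R console output to the file
fit <- try(glmm.admb(y ~ 1, random = ~1, group = "subject",
                     data = data, family = "nbinom"),
           silent = TRUE)
sink()                             # restore normal console output
failed <- inherits(fit, "try-error")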


Chris




-----------------------------
Chris Gast
cmgast at gmail.com


On Wed, Feb 16, 2011 at 1:21 AM, Zhanpan Zhang
<zhanpan.zhang at email.ucr.edu> wrote:

> Hi all,
>
> Thanks for posting my previous email in this list, Prof. Skaug!
>
> For the data I previously mentioned, R keeps displaying the error message
> "Error matrix not positive definite in choleski_decomp" and keeps running
> without stopping (the R code is shown below; the data have only two columns,
> "y" being the response and "subject" being the random grouping factor).
>
> > problem=data.class(try(glmm.admb(y~1,random=~1,group="subject",data=data,family="nbinom"),silent=T))=="try-error"
>
> This has been solved by adding the option "easyFlag=FALSE".  However, I
> simulated some other data, and the same problem of "Error matrix not
> positive definite in choleski_decomp" popping up endlessly came back again
> even with "easyFlag=FALSE".
>
> I am actually applying glmm.admb to a number of simulated data sets (say
> 500), and I can accept that some of them will not be fitted very well.
> But is it possible to have R jump to the next data set if the above
> problem occurs for the current one?  (The only thing I can do currently
> is to force R to stop, so I can never finish this loop of fitting 500 data
> sets.)
>
> Any comments will be appreciated!
>
>
> Zhanpan
>
>
>
>

