[ADMB Users] ADMB and memory issues

John Sibert sibert at hawaii.edu
Thu Aug 26 10:33:38 PDT 2010


Mark,
You pose very difficult questions to answer a priori. I am familiar with 
two models that integrate spatial PDEs. One model uses ADMB; the other 
(SEAPODYM) uses only the AUTODIF library. Both models take advantage of 
some ADMB features that help make the computation feasible. I'll 
mention some below.

The first example is a model of tagged tuna populations. It is a 
time-dependent solution at 0.5 degree resolution on a 160x70 model 
domain. Using ragged arrays to avoid allocating memory for continental 
land masses, the number of computational elements is 10003. The number 
of parameters estimated ranges between 20 and 100, depending on model 
structure. Each evaluation of the likelihood function requires about 
120 solutions of the PDE and takes about 6.5 seconds on a 64-bit Linux 
Core Duo laptop. All of the ADMB gradient information is saved in 
memory, without writing to disk. Since the cohorts of tags at liberty 
do not depend on one another, it is possible to compute the likelihood 
contribution of each cohort separately using "the funnel", which helps 
minimize the volume of stored gradient information.
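
To give a flavor of what that looks like in a TPL file, here is a 
minimal sketch -- the names cohort_nll, solve_pde_for_cohort, and 
ncohorts are made up for illustration, and the AUTODIF manual has the 
details of funnel_dvariable:

  PROCEDURE_SECTION
    f = 0.0;
    for (int k = 1; k <= ncohorts; k++)
      f += cohort_nll(k);  // only the condensed derivatives of each
                           // cohort survive on the gradient tape

  FUNCTION dvariable cohort_nll(int k)
    funnel_dvariable nll;
    // solve_pde_for_cohort() stands in for the expensive PDE solve;
    // assigning its result to a funnel_dvariable condenses the
    // derivative information generated inside this function
    nll = solve_pde_for_cohort(k);
    dvariable tmp = nll;
    return tmp;

The point is that ADMB keeps only each cohort's condensed derivatives 
with respect to the parameters, not a record of every intermediate 
operation in the PDE solver.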

The SEAPODYM model is similar but is evaluated on a much larger model 
domain (the entire Pacific Ocean) and estimates fewer parameters. The 
authors of the model have reduced the stored gradient information by 
writing adjoint code for all of the model and saving it in a way that 
conforms to ADMB protocols without using specific ADMB types.
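
If the idea of hand-written adjoint code is unfamiliar, the pattern is: 
compute the forward value in ordinary (double) arithmetic, save only 
what the reverse sweep will need, and supply a routine that propagates 
derivatives backward through that step. Here is a toy version in plain 
C++, with a trivial function standing in for a PDE step -- it 
illustrates the concept only, not ADMB's actual save/restore protocol 
(the AUTODIF manual documents that):

  #include <cmath>
  #include <cstdio>

  // forward pass: y = x*exp(-x), a stand-in for an expensive model
  // step; save only what the adjoint needs (here, just x)
  double forward(double x, double* saved)
  {
    *saved = x;
    return x * std::exp(-x);
  }

  // adjoint pass: given dL/dy, return the contribution to dL/dx,
  // reusing the saved value instead of a recorded operation list
  double adjoint(double dLdy, double saved)
  {
    double x = saved;
    double dydx = std::exp(-x) * (1.0 - x);  // d/dx [x*exp(-x)]
    return dLdy * dydx;
  }

  int main()
  {
    double saved;
    double y = forward(2.0, &saved);
    double dLdx = adjoint(1.0, saved);  // reverse sweep, dL/dy = 1
    std::printf("y = %g  dy/dx = %g\n", y, dLdx);
    return 0;
  }

The memory saving comes from the fact that nothing between the forward 
call and the adjoint call needs to be recorded on the gradient tape.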

The most useful tool for optimizing ADMB code is probably a profiler 
such as gprof, which identifies the components of the program that 
consume the most machine cycles. These "time hogs" then become the 
candidates for developing adjoint code.
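
If you have never used it: compile with profiling enabled, run the 
model once, and gprof prints a table of where the time went. Roughly 
(you will need to add the usual ADMB include and library flags, and 
wiring -pg into the admb script may take some fiddling):

  g++ -pg -O2 -o mymodel mymodel.cpp   # -pg adds profiling hooks
  ./mymodel                            # a normal run writes gmon.out
  gprof ./mymodel gmon.out | less      # per-function time breakdown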

So I would not be intimidated by integrating a PDE 1000 times. Is it 
possible to pose the model so that the integrations are independent of 
one another, letting you use the funnel? How many dimensions are 
involved in the PDE?

Hope this helps,
John


On 08/25/2010 12:01 PM, Mark Payne wrote:
> Dear ADMBers,
>
> I am just about to embark on an optimisation problem that involves fitting the parameters of a PDE to a fairly large data set - in the absence of an analytical solution, I need to integrate the PDE numerically to predict each data point. Unfortunately, due to the structure of the problem, I have to integrate the same PDE again slightly differently for each of the 1000 data points I have.
>
> As you can probably see, this gets to be "operation" intensive, requiring many calculations to get to the final likelihood that I will be optimising. I would dearly like to do this in ADMB, but I'm very concerned that the memory issues of keeping track of so many operations and their derivatives may simply blow up in my face and become impossible to deal with on any mortal machine.
>
> I was therefore wondering if I could get some advice about how I should start to plan this? Is there any way to make an a priori guess as to how much memory I'm going to need? How can I keep track of the memory from the software as I add components? Can I do a "memory profiling" exercise or similar, so that I can see which parts of the system are the most expensive? Are there any tricks I can use to reduce the memory requirements? Are there any good resources that deal with these issues?
>
> I'm looking forward to hearing your opinions and ideas.
>
> Cheers,
>
> Mark
> _______________________________________________
> Users mailing list
> Users at admb-project.org
> http://lists.admb-project.org/mailman/listinfo/users
>
>    

-- 
John Sibert
Emeritus Researcher, SOEST
University of Hawaii at Manoa

Visit the ADMB project http://admb-project.org/



