[Developers] trying to compare autodif with cppad

dave fournier davef at otter-rsch.com
Sat Aug 16 05:43:40 PDT 2014


On 08/15/2014 11:46 PM, Kasper Kristensen wrote:

Hi Kasper,

Thanks for the input.  It is strange that the overflow causes NaNs in
the gradient, since det itself is not used, only logdet.
However, that line is easy to remove. I'm not sure to what extent the absence of
adjoint code is an issue, or whether anybody cares.
Back in the day I recall a potential customer telling me that Griewank
had already tried what I proposed and that it did not work; he was
referring to the lack of adjoint code producing code that was too slow.

             Dave



> Hi Dave,
>
> I finally got some time to try your test case.
>
> - cppad version: Avoid creating a tape with numerical overflows by inserting valid parameters for the tape creation. Or, easier: remove the line "det = Scalar( signdet ) * exp( logdet );" to get rid of the overflows (NaNs in the gradient).
> - The cppad version runs out of memory because this example tapes on the order of n^3 floating-point operations. The admb version avoids this by using adjoint code, I believe. Similar tricks would have to be applied to cppad to make this example run for large n (you can hand-code your own derivatives using cppad's concept of 'atomic functions').
> - It would be interesting to compare just the sweep performance for the two tools, that is just the lines "dw=f.Reverse(1,w);" and "gradcalc(n*n,g);".
>
> Kasper
>
>
> ________________________________________
> From: dave fournier [davef at otter-rsch.com]
> Sent: Wednesday, August 13, 2014 3:57 PM
> To: Kasper Kristensen
> Cc: developers at admb-project.org
> Subject: Re: [Developers] trying to compare autodif with cppad
>
> On 08/12/2014 10:01 PM, Kasper Kristensen wrote:
>
> Sorry about forgetting the hpp file. It is now attached.  The cppad
> version is now much faster with the -DNDEBUG option.  However, when I
> increase the matrix size to 500x500 (I'm aiming for a fast 2,000x2,000),
> the cppad version produces NaNs. Also note that the autodif version
> generates the numbers and stores them in a file named "vector" for the
> cppad version.
>
>         Dave
>
>
>
>> Dave,
>>
>> I could not run your test because "myldet.hpp" was not attached.
>> Did you try setting the "-DNDEBUG" flag for the cppad compilation? If I recall correctly, this can make a big difference.
>>
>> Kasper
>>
>>
>>
>> ________________________________________
>> From: developers-bounces at admb-project.org [developers-bounces at admb-project.org] on behalf of dave fournier [davef at otter-rsch.com]
>> Sent: Wednesday, August 13, 2014 5:26 AM
>> To: developers at admb-project.org
>> Subject: [Developers] trying to compare autodif with cppad
>>
>>       There has been a lot of material about TMB lately.  I think that TMB
>> uses cppad as its underlying AD engine.   I am interested in trying to
>> understand whether cppad is superior to autodif and, if so, whether
>> ADMB could be modified to use cppad.
>>
>> As a first attempt I have been working at reproducing the LU
>> decomposition used to calculate the log of (the absolute value of) the
>> determinant of a matrix.  The code is attached.  myreverse.cpp
>> calculates the log det and the gradient via reverse-mode AD using
>> cppad; myreverse_admb.cpp does the same thing using autodif.
>>
>> For a 300x300 matrix the time required for these calculations is
>> approximately 0.25 seconds for autodif and 19 seconds for cppad, so
>> autodif is about 75 times faster.  Obviously there may be techniques
>> which can speed up cppad, or I may have made some beginner's error.
>> Perhaps the experts among us could comment.
>>
>> I could not compare matrices larger than 300x300 because the cppad code
>> crashed.  The autodif version
>> could do a 500x500 matrix in 1.23 seconds and a 1000x1000 matrix in 11
>> seconds.
>>
>>
>


