[Developers] parallel processing with MPI
dave fournier
davef at otter-rsch.com
Sun Jul 3 10:16:31 PDT 2011
On 11-07-03 09:58 AM, Derek Seiple wrote:
It works for me with the catage example. Look for the macro USE_ADMPI,
which includes the MPI-specific code. The MPI code is controlled by
mpi_master. To run it,

./catage -master -nslaves 5

will do the trick with 5 slaves. There are a few funny things about
communication between the master and slaves at the beginning, but I think
the code is just taking advantage of a newbie. To debug, I have the master
read in ad[0] and pass it to the slaves. On Linux the slaves are waiting
at this point and you can attach to a slave process with

sudo ddd catage process-id

I need the sudo; some kind of permission problem.
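
For orientation, here is a minimal sketch of how a master can spawn its
slaves with Open MPI's dynamic process model. This is my illustration, not
the actual admpi_manager startup code; spawn_slaves is a hypothetical name,
but the resulting intercommunicator plays the role of the `everyone`
communicator used in the code below.

#include <mpi.h>

// Hypothetical startup sketch: launch nslaves copies of the slave
// executable and keep the intercommunicator for later sends/receives.
MPI_Comm everyone;

void spawn_slaves(const char* program, int nslaves)
{
    MPI_Comm_spawn(program,            // slave executable to launch
                   MPI_ARGV_NULL,      // no extra command-line arguments
                   nslaves,            // number of slave processes
                   MPI_INFO_NULL,
                   0,                  // root rank performing the spawn
                   MPI_COMM_SELF,
                   &everyone,          // master<->slaves intercommunicator
                   MPI_ERRCODES_IGNORE);
}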
In void admpi_manager::send_int_to_slave(int i, int _slave_number),
at the beginning I seemed to need to wait with

// this wait should not be needed
// ***************************************************
MPI_Wait(&(global_request[mpi_offset]), &myStatus);
// ****************************************************

but now that I know more, it works without it.
void admpi_manager::send_int_to_slave(int i, int _slave_number)
{
    int slave_number = _slave_number - 1;
    MPI_Status myStatus;
    MPI_Request request;
    /* make sure that the previous read using this memory area
       has completed */
    if (global_request[mpi_offset])  /* checks that it has been used at
                                        least once */
    {
        MPI_Wait(&(global_request[mpi_offset]), &myStatus);
    }
    mpi_int[mpi_offset] = i;
    MPI_Isend(&(mpi_int[mpi_offset]), 1, MPI_INT, slave_number, 0,
              everyone, &(global_request[mpi_offset]));
    //sleep(1);
    // this wait should not be needed
    // ***************************************************
    MPI_Wait(&(global_request[mpi_offset]), &myStatus);
    // ****************************************************
    increment_mpi_offset();
}
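
For reference, the matching receive on the slave side might look something
like this. It is a sketch under my assumptions, not the actual code:
get_int_from_master is a hypothetical name, it uses a simple blocking
MPI_Recv rather than the Isend/request bookkeeping above, and I assume the
slave obtained `everyone` from MPI_Comm_get_parent.

int admpi_manager::get_int_from_master(void)
{
    int value = 0;
    MPI_Status myStatus;
    // the master is rank 0 in the remote group of the intercommunicator,
    // and tag 0 matches the MPI_Isend in send_int_to_slave
    MPI_Recv(&value, 1, MPI_INT, 0, 0, everyone, &myStatus);
    return value;
}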
> Good work Dave. I'll be able to look into this in more detail this
> coming week.
>
> Derek
>
> On Sun, Jul 3, 2011 at 8:55 AM, dave fournier <davef at otter-rsch.com> wrote:
>> Hi lazy people!
>>
>> I have implemented parallel computation of the Hessian using Open MPI.
>>
>> I did it with my old code base because I have it set up better for
>> debugging. There are only a small number of files to change, however.
>> I attached a zip.
>>
>> The slaves write their part of the Hessian into their own temporary
>> files with _slavenumber appended (see the file-naming sketch after this
>> quote). This will have to be done for all the temporary files, since in
>> the simple implementation all processes are in the same directory.
>>
>> The main work is to write all the functions to pass autodif types back
>> and forth. I have made a beginning (a sketch of one such function
>> follows the quote).
>>
>> Once you install Open MPI, the makefile needs to be modified to include
>> the right compile flags for Open MPI; see the Open MPI docs concerning
>> the mpicxx -showme:compile and -showme:link options (a small example
>> follows the quote).
>>
>> By extending this stuff a bit we should be able to parallelize problems
>> like Mollie's pond model to speed it up a lot (hopefully).
>>
>> Feedback would be welcome. I'm off hiking for 4 days tomorrow.
>>
>> Dave
>>
>>
>> _______________________________________________
>> Developers mailing list
>> Developers at admb-project.org
>> http://lists.admb-project.org/mailman/listinfo/developers
>>
>>
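
As mentioned in the quote, each slave needs its own temporary files. A
minimal sketch of the naming scheme, assuming a helper like the following;
the function name is hypothetical and the actual implementation in the zip
may differ.

#include <cstdio>

// Hypothetical helper: append _slavenumber to a temporary file name so
// that slaves sharing a directory do not clobber each other's files,
// e.g. "cmpdiff.tmp" becomes "cmpdiff.tmp_3" for slave 3.
void make_slave_filename(char* buffer, int n,
                         const char* basename, int slave_number)
{
    std::snprintf(buffer, n, "%s_%d", basename, slave_number);
}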
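
And a sketch of one way the autodif-type passing could be extended,
following the pattern of send_int_to_slave above. The function name and
the use of a plain blocking MPI_Send are my choices for illustration, not
the actual API.

void admpi_manager::send_dvector_to_slave(const dvector& v,
                                          int _slave_number)
{
    int slave_number = _slave_number - 1;
    int mn = v.indexmin();
    int mx = v.indexmax();
    // send the index bounds first so the slave can size its copy
    send_int_to_slave(mn, _slave_number);
    send_int_to_slave(mx, _slave_number);
    // a blocking send of the raw doubles keeps buffer reuse simple
    MPI_Send((void*)&(v[mn]), mx - mn + 1, MPI_DOUBLE,
             slave_number, 0, everyone);
}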
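
Finally, on the makefile change mentioned above: under GNU Make one way to
pick up the flags is to capture the -showme output directly (the variable
names here are illustrative):

MPI_CXXFLAGS = $(shell mpicxx -showme:compile)
MPI_LDFLAGS = $(shell mpicxx -showme:link)

and add those to the existing compile and link rules.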