[Developers] FW: parallel processing
dave fournier
davef at otter-rsch.com
Fri Jul 1 09:52:46 PDT 2011
On 11-07-01 07:56 AM, Hans Julius Skaug wrote:
I think one has to try this stuff out a bit before forming an opinion.
MPI looks pretty efficient; as a result it is also a bit scary. For
speed one wants to use asynchronous message passing. This means that,
say, the master process does not wait for the slave process to read a
dvector that it has sent to the slave. MPI does not copy the vector; it
simply keeps the pointers (addresses) of the relevant data. Consider
the following pseudocode for a send from master to a slave:
{
  dvector x(1,25);
  // stuff
  int mmin = x.indexmin();
  int mmax = x.indexmax();
  send_to_slave(&mmin, 3);       // send lower bound to slave 3
  send_to_slave(&mmax, 3);       // send upper bound to slave 3
  send_to_slave(&(x[mmin]), 3);  // send pointer to beginning of the vector's guts
}
Once one leaves the block of code, the pointers &mmin and &mmax point to
garbage. So one needs to create a block on the heap to store the values.
One can cycle through the block, but then one needs to check that the
data have been read before overwriting it. MPI provides the function
MPI_Wait to do that. Of course the vector could get overwritten as well,
but hopefully one would notice that. Anyway, one needs to be careful.
Check out send_dvector_to_slave to see how it works. Once this is all
encapsulated into a class it should get easier.
> ADMB developers,
>
> Following the session on parallelization in Santa Barbara,
> Dave has made a few suggestions: "thread local storage is the concept we were missing",
> and the suggestion to use MPI (see message below). I want
> to bring this discussion into plenary because the choice of strategy is important.
>
> Questions:
>
> 1. What kind of parallel architecture do we aim at:
> a) Multicore (shared memory) processors. Every machine has got this today so there is a large "market".
> b) Clusters connected via a network. This is what Dave mentions.
>
> I had thought of only a). If b) is achievable for free one should clearly say "yes",
> but the question is whether one should let b) dictate the choice of parallelization approach.
>
> 2. Choice of parallelization approach: pthreads, MPI, or OpenMP.
> I have only investigated OpenMP. If applicable it requires very little change to existing
> code, which in my mind is important. The choice we make (pthreads, MPI, or OpenMP) should:
> - satisfy all our needs
> - last 10 years as a technology
>
> Dave: do you know if OpenMP used together with "thread local storage" will do what we want?
>
> Developers: 1) is a question that we all should have an opinion about.
>
> Hans
>
> __________
>
> -----Original Message-----
> From: developers-bounces at admb-project.org [mailto:developers-bounces at admb-project.org] On Behalf Of dave fournier
> Sent: Thursday, June 30, 2011 4:05 PM
> To: developers at admb-project.org
> Subject: [Developers] parallel processing
>
> I think that we should consider Open MPI for parallel processing. This
> works at the process level rather than the thread level. It seems to
> support the master-slave model, which is what I used with PVM. In this
> example the master creates a number of slaves. This can be done on one
> machine or over a network.
>
> After installing Open MPI, this compiles with
>
> mpicxx manager.c -o manager
>
> mpicxx worker.c -o worker
>
> and runs with
>
> mpirun -np 1 ./manager
-------------- next part --------------
A non-text attachment was scrubbed...
Name: manager.cpp
Type: text/x-c++src
Size: 5588 bytes
Desc: not available
URL: <http://lists.admb-project.org/pipermail/developers/attachments/20110701/99be6fe0/attachment.cpp>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: worker.cpp
Type: text/x-c++src
Size: 2868 bytes
Desc: not available
URL: <http://lists.admb-project.org/pipermail/developers/attachments/20110701/99be6fe0/attachment-0001.cpp>