[Developers] FW: parallel processing

Jim Ianelli Jim.Ianelli at noaa.gov
Wed Jul 6 11:05:42 PDT 2011

I don't have a good feeling for what's best here, but from some modest 
searching online it seems MPI might be most useful, since (I think) some 
implementations have facilities to access the GPU (which may be much 
faster than accessing CPUs, especially over networks).  One example 
software package that uses the GPU directly is shown on 

Apologies if GPUs have been part of this discussion already and I've 
missed it.


On 7/6/2011 10:48 AM, John Sibert wrote:
> Sorry to be slow in responding to Hans' questions, but here is a 
> response.
> 1. What kind of parallel architecture do we aim at:
>    a) Multicore (shared memory) processors. Every machine has got this 
> today so there is a large "market".
>    b) Clusters connected via a network.
> We probably need to do both. Multicore machines are indeed ubiquitous. 
> Clusters have also become common in scientific computing centers.
> Does anyone know how the -j option in GNU make was programmed? make 
> -j4 runs 4 processes (compiles 4 source modules at the same time) on 
> my dual-core laptop. Compilation of source code is probably easy to 
> parallelize, but it might be worth a glance at the implementation. 
> Since it is GNU, the source code is available. I'll look.
> Open MPI seems aimed at implementation on clusters, but also seems to 
> work on multicore machines. mpirun -np 2 seems to run on my laptop, 
> but I cannot really tell whether it is doing what I expect. Open MPI 
> appears to have some nice features, but I'm having a lot of trouble 
> penetrating the documentation. It is really hard to find what is what. 
> Is MPI:: a class or a namespace? Sometimes it is written MPI_. Where is 
> the documentation? Is it possible to implement both cluster-based 
> parallelization and shared-memory multi-threading in Open MPI?
> This is from the Open MPI FAQ:
>> 7. Is Open MPI thread safe?
>> Support for MPI_THREAD_MULTIPLE (i.e., multiple threads executing 
>> within the MPI library) and asynchronous message passing progress 
>> (i.e., continuing message passing operations even while no user 
>> threads are in the MPI library) has been designed into Open MPI from 
>> its first planning meetings.
>> Support for MPI_THREAD_MULTIPLE is included in the first version of 
>> Open MPI, but it is only lightly tested and likely still has some 
>> bugs. Support for asynchronous progress is included in the TCP 
>> point-to-point device, but it, too, has only had light testing and 
>> likely still has bugs.
>> Completing the testing for full support of MPI_THREAD_MULTIPLE and 
>> asynchronous progress is planned in the near future.
> 2. Choice of parallelization approach: pthreads, MPI, or OpenMP.
> It is too soon to say. Dave has made some progress with Open MPI, so 
> maybe the question is answered?
> Cheers,
> John
> On 07/01/2011 04:56 AM, Hans Julius Skaug wrote:
>> ADMB developers,
>> Following the session on parallelization in Santa Barbara,
>> Dave has made a few suggestions: "thread local storage is the concept 
>> we were missing",
>> and the suggestion to use MPI (see message below). I want
>> to bring this discussion into plenary because the choice of strategy 
>> is important.
>> Questions:
>> 1. What kind of parallel architecture do we aim at:
>>     a) Multicore (shared memory) processors. Every machine has got 
>> this today so there is a large "market".
>>     b) Clusters connected via a network. This is what Dave mentions.
>> I had thought of only a). If b) is achievable for free one should 
>> clearly say "yes",
>> but the question is whether one should let b) dictate the choice of 
>> parallelization approach.
>> 2. Choice of parallelization approach: pthreads, MPI, or OpenMP.
>>     I have only investigated OpenMP. Where applicable it requires very 
>> little change to existing
>>     code, which in my mind is important. The choice we make 
>> (pthreads, MPI, or OpenMP) should:
>>     - satisfy all our needs
>>     - last 10 years as a technology
>> Dave: do you know whether OpenMP, used together with "thread local 
>> storage", will do what we want?
>> Developers: question 1 is one we should all have an opinion about.
>> Hans
>> __________
>> -----Original Message-----
>> From: developers-bounces at admb-project.org 
>> [mailto:developers-bounces at admb-project.org] On Behalf Of dave fournier
>> Sent: Thursday, June 30, 2011 4:05 PM
>> To: developers at admb-project.org
>> Subject: [Developers] parallel processing
>> I think that we should consider Open MPI for parallel processing. This
>> is on the process level rather than the thread level. It seems to 
>> support the master-slave model, which is what I used with PVM. In this 
>> example the master creates a number of slaves. This can be done on one 
>> machine or over a network.
>> After installing Open MPI, this compiles with
>>    mpicxx manager.c -o manager
>>    mpicxx worker.c -o worker
>> and runs with
>>    mpirun -np 1 ./manager
>> _______________________________________________
>> Developers mailing list
>> Developers at admb-project.org
>> http://lists.admb-project.org/mailman/listinfo/developers
