[ADMB Users] Does CUDA suck? answer NO!

CHRIS GRANDIN cgrandin at shaw.ca
Wed Sep 14 10:17:01 PDT 2011


Dave - sounds good. Here is what the OpenCL matmult kernel looks like; it is very similar to CUDA, just with slightly different syntax (a rough CUDA counterpart follows the kernel below for comparison). OpenCL takes care of the sub-parallelized jobs that are spawned from each multiprocessor on your card, whereas with CUDA you need to define these yourself, hence the __kernel vs. __global__ function qualifiers.

Also, watch your output precision: more than likely your graphics card does not support double-precision operations, and CUDA will silently cast all of your doubles to floats. That is another issue which will be resolved with time and hardware upgrades!
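
If you would rather check than guess, the CUDA runtime can report the card's compute capability, and double precision needs compute capability 1.3 or higher. A minimal sketch along those lines (device 0 assumed, no error checking):

#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // query device 0

    // Double precision requires compute capability >= 1.3;
    // on older cards the double arithmetic gets demoted to float.
    bool hasDouble = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
    printf("%s: compute capability %d.%d, doubles %s\n",
           prop.name, prop.major, prop.minor,
           hasDouble ? "supported" : "NOT supported");
    return 0;
}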

Chris

///////////////////////////////////////////////////////////////////////////////
//! Matrix multiplication on the device: C = A * B
//! wA is A's width and wB is B's width
////////////////////////////////////////////////////////////////////////////////
__kernel void
matrixMul( __global float* C,
           __global float* A,
           __global float* B,
           int wA,
           int wB ){
    
    // 2D Thread ID
    int tx = get_local_id(0);
    int ty = get_local_id(1);
    
    // value stores the element 
    // that is computed by the thread
    float value = 0;
    for (int k = 0; k < wA; ++k){
      float elementA = A[ty * wA + k];
      float elementB = B[k * wB + tx];
      value += elementA * elementB;
    }
    
    // Write the result to device memory; each
    // thread writes one element of C (which has width wB)
    C[ty * wB + tx] = value;
}
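
For comparison, here is roughly what the same kernel would look like in CUDA (a sketch only; the name matrixMulCuda is mine, and like the OpenCL version it indexes within a single block/work-group): __global__ replaces __kernel, threadIdx replaces get_local_id, and the __global address-space qualifiers go away.

// Hypothetical CUDA counterpart of the OpenCL kernel above (sketch only)
__global__ void
matrixMulCuda( float* C,
               const float* A,
               const float* B,
               int wA,
               int wB ){

    // 2D thread ID within the block (mirrors get_local_id(0)/get_local_id(1),
    // so this assumes a single block covering the whole output matrix)
    int tx = threadIdx.x;   // column of C
    int ty = threadIdx.y;   // row of C

    // value stores the element of C computed by this thread
    float value = 0;
    for (int k = 0; k < wA; ++k){
      value += A[ty * wA + k] * B[k * wB + tx];
    }

    // C has width wB, so each thread writes one element at (ty, tx)
    C[ty * wB + tx] = value;
}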


----- Original Message -----
From: dave fournier <davef at otter-rsch.com>
Date: Wednesday, September 14, 2011 10:02 am
Subject: Re: [ADMB Users] Does CUDA suck?  answer NO!
To: CHRIS GRANDIN <cgrandin at shaw.ca>
Cc: users at admb-project.org

> On 11-09-14 08:40 AM, CHRIS GRANDIN wrote:
> 
> The main point of that exercise was simply to check the
> performance of the GPU stuff.
> It appears that OpenCL may be the future, but the interface
> sucks big time compared to CUDA.
> Since my expertise is in the derivative stuff, and since conversion
> of kernel code from CUDA to OpenCL is well understood (just don't
> use any C++), I decided to implement a GPU version of the vectorized
> negative binomial density for dvar_vectors in CUDA as an example.
> With luck it will be finished today.
> 
>         Dave
> 
> 
> 
> 
> >Dave, I am wondering why you didn't use the OpenCL library like I
> did in my matrix mult example at the workshop?  If you do,
> there is no requirement for a special compiler (nvcc) or extra
> makefiles, and the code is already optimized.
> >
> >Yes, the limiting factor is the bussing of data to/from the GPU,
> and for addition it outweighs the cost of the addition
> operations. It's the same for OpenCL; that's why I did the matrix
> mult example.
> >
> >Also, I don't see how you are carrying the derivative
> information around; that has been my issue thus far, since CUDA
> and OpenCL don't support C++ classes yet!  Please let me know what
> you think of this, as this parallelization has been of ongoing
> interest to me.
> >
> >Thanks,
> >Chris
> >
> >----- Original Message -----
> >From: dave fournier <davef at otter-rsch.com>
> >Date: Saturday, September 3, 2011 4:05 pm
> >Subject: Re: [ADMB Users] Does CUDA suck?  answer NO!
> >To: users at admb-project.org
> >
> >> First there is an error in the code. It should read
> >>
> >>
> >> return z;
> >>
> >>  and not
> >>
> >>           return x+y;
> >>
> >> However I thought that maybe the problem is that  addition
> >> is too trivial compared to the
> >> overhead of moving things to the GPU and back. I changed the
> >> function to pow(x,y)
> >> and lo!  the el cheapo GPU is faster (about 6 times faster).
> >> So how hard is a vector pow?  All that was necessary was to
> >> take the included VecAdd
> >> function and modify it to
> >>
> >>
> >> __global__ void VecPow(const double* A, const double* B, double* C, int N)
> >> {
> >>     int i = blockDim.x * blockIdx.x + threadIdx.x;
> >>     double x=0.0;
> >>     if (i < N)
> >>     {
> >>         C[i] = pow(A[i],B[i]);
> >>     }
> >> }
> >>
> >> Code is attached. Note I use mypow just to avoid clash with
> >> existing admb libs.
> >>
> >> 
> 
> 
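
Since the bussing of data to/from the GPU keeps coming up as the limiting factor, here is a minimal host-side sketch of how a kernel like VecPow would typically be launched; the two cudaMemcpy calls in each direction are where that overhead lives. The vector length, launch configuration, and variable names are illustrative only, not taken from Dave's attached code.

#include <math.h>
#include <cuda_runtime.h>

__global__ void VecPow(const double* A, const double* B, double* C, int N)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < N)
    {
        C[i] = pow(A[i], B[i]);
    }
}

int main()
{
    const int N = 1 << 20;                 // illustrative vector length
    size_t bytes = N * sizeof(double);

    // Host data (fill h_A and h_B with real values in practice)
    double *h_A = new double[N], *h_B = new double[N], *h_C = new double[N];

    // Device buffers
    double *d_A, *d_B, *d_C;
    cudaMalloc(&d_A, bytes);
    cudaMalloc(&d_B, bytes);
    cudaMalloc(&d_C, bytes);

    // Host -> device transfer: half of the "bussing" overhead
    cudaMemcpy(d_A, h_A, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_B, h_B, bytes, cudaMemcpyHostToDevice);

    // One thread per element
    int threads = 256;
    int blocks  = (N + threads - 1) / threads;
    VecPow<<<blocks, threads>>>(d_A, d_B, d_C, N);

    // Device -> host transfer: the other half of the overhead
    cudaMemcpy(h_C, d_C, bytes, cudaMemcpyDeviceToHost);

    cudaFree(d_A); cudaFree(d_B); cudaFree(d_C);
    delete[] h_A; delete[] h_B; delete[] h_C;
    return 0;
}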

