Unfortunately I don't have time today to play with this. For the record, my graphics card is an NVIDIA Quadro FX 3800.

After a number of amendments to the newfmin.cpp file based on Dave's suggestions, it occurred to me that we have a "branches" directory in the SVN repository to keep track of such changes.

There was an old gpu folder in there, which I don't know anything about. So rather than replace the file in src/linad99, I just put the file in the main directory of the branch: /branches/gpu/newfmin.cpp.
Here's a link to the modified file in case anyone else wants to try it:
http://admb-project.org/redmine/projects/issues/repository/entry/branches/gpu/newfmin.cpp

-Ian

On Wed, May 16, 2012 at 6:27 AM, dave fournier <davef@otter-rsch.com> wrote:

On 12-05-15 02:59 PM, Ian Taylor wrote:

I'm 99% sure this is not running on the GPU. You need to get an error-free run, and this has one error when it tries to compile the source for the GPU. The error message is not quite right because it got duplicated in the code, but there is an error. One could find out what the returned error code is and look it up in the cl.h header file.
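
For anyone who wants to do that, a minimal sketch of the lookup (the constants listed are a few of the ones defined in cl.h; the helper name and the usage comment are just illustrative):

// Sketch only: translate a returned OpenCL status code into the name
// defined in cl.h, so the failing call can be identified.
#include <CL/cl.h>
#include <cstdio>

static const char *cl_status_name(cl_int code)
{
  switch (code)
  {
    case CL_SUCCESS:               return "CL_SUCCESS";
    case CL_DEVICE_NOT_FOUND:      return "CL_DEVICE_NOT_FOUND";
    case CL_BUILD_PROGRAM_FAILURE: return "CL_BUILD_PROGRAM_FAILURE";
    case CL_INVALID_CONTEXT:       return "CL_INVALID_CONTEXT";
    case CL_INVALID_KERNEL_NAME:   return "CL_INVALID_KERNEL_NAME";
    default:                       return "unlisted code -- see cl.h";
  }
}

// usage:
//   cl_int ret = clBuildProgram(program, 1, &device, "", NULL, NULL);
//   if (ret != CL_SUCCESS)
//     printf("OpenCL error %d (%s)\n", (int)ret, cl_status_name(ret));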

Hi all,
Thanks to help from Dave, I finally got his example working (perhaps) on a Windows computer with an NVIDIA GPU, using Microsoft Visual C++. I got an error about "Error trying to load Kernel source GPU" (pasted at the bottom of this email along with other warnings that I don't understand), but using a tool called GPU-Z, I was able to see that the GPU load went from 1% to 99%. Nevertheless, using the GPU only cut the run time in half, and the majority of that improvement was achieved with the BFGS algorithm without the GPU (USE_GPU_FLAG=0). So I'm thinking the GPU is not being utilized correctly, or my GPU is not as well suited to this problem as Dave's, or the VC compiler is not as well suited as GCC.
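
If the "Error trying to load Kernel source" message is taken at face value, it indicates that the kernel source file could not be opened from the directory the model was run in (though as noted below, the message itself may be misleading). A minimal sketch of that kind of loader, with an illustrative file name rather than the actual newfmin code:

// Sketch only: read an OpenCL kernel source file into a string before it is
// passed to clCreateProgramWithSource().  The file name is illustrative.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

std::string load_kernel_source(const std::string &path)
{
  std::ifstream in(path.c_str());
  if (!in)
  {
    // Failure here is what a "can't load kernel source" style message
    // reports: the .cl file is not where the running model looks for it.
    std::cerr << "Error trying to load Kernel source " << path << std::endl;
    return std::string();
  }
  std::ostringstream buf;
  buf << in.rdbuf();   // slurp the whole file
  return buf.str();
}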

Speed comparison:
new newfmin with GPU: 2 minutes, 19 seconds for 442 function evaluations.
new newfmin without GPU: 2 minutes, 37 seconds for 682 function evaluations.
old newfmin (no GPU): 5 minutes, 21 seconds for 2119 function evaluations.

I had struggles at various points along the way, including installing the correct OpenCL stuff for my GPU, building ADMB with or without the new newfmin file, and linking the bigmin model to the OpenCL libraries. Everything I know about C++, I learned from working with ADMB, so this was a valuable addition to my education.
-Ian

### Here are the warnings and errors ###

>bigmin -mno 10000 -crit 1.e-10 -nox -nohess
Error trying to open data input file bigmin.dat
command queue created successfully
Number of devices found 1
Error trying to load Kernel source GPU
All buffers created successfully
Program creation code = 0
Program build code = 0
Create Kernel2 error code = 0
Create Kernel error code = 0
Create Kernel3 error code = 0
Create Kernel4 error code = 0
Create Kernel1 error code = 0

Initial statistics: 6144 variables; iteration 0; function evaluation 0; phase 1
...

On Tue, May 15, 2012 at 10:51 AM, John Sibert <sibert@hawaii.edu> wrote:

I tried to get it working, but did not succeed. In the process I might have learned a few things, so I have included a lot of stuff in this email.

It would be really helpful if others on this list would also give it a try and share the results with the rest of us.

The main problem I encountered was ignorance of what (if anything) needed to be installed on my computer. Neither the OpenCL nor the AMD websites offer much guidance.

In the end I concluded that my hardware (a Dell D series laptop with an Nvidia graphics processor, purchased in 2009 and running Ubuntu 10.04) is unsuitable, probably because it does not support double precision arithmetic.

Without installing any new software, the machine comes with the executable "clinfo", which provides a lot of information about the hardware. Sections to note are "Platform Extensions: cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll" and "Extensions: cl_khr_fp64 cl_amd_fp64 ..." (without the word "Platform"). If the graphics card supports double precision calculations it should report "cl_khr_fp64 cl_amd_fp64", but note the ambiguity of the two different "Extensions" sections.

Emboldened, I managed to build the bigmin example without much drama, and
$ ./bigmin -mno 10000 -crit 1.e-10 -nox -nohess
produced the following:

Error creating command queue ret = -34
Number of devices found 0
No GPU found

So I disabled the Nvidia graphics driver, downloaded AMD-APP-SDK-v2.6-lnx64.tgz from
http://developer.amd.com/sdks/AMDAPPSDK/downloads/Pages/default.aspx
and installed it. After messing around with linker paths, bigmin compiled and linked, but produced the same run-time error.
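
Not a fix, but a minimal stand-alone probe like the following (plain cl.h calls, nothing ADMB-specific; variable names are illustrative) can show which step of the platform/device/context/queue chain fails first, and with what code:

// Sketch only: run each setup call in turn and print its return code,
// so a failure like "ret = -34" can be traced to the step that produced it.
#include <CL/cl.h>
#include <cstdio>

int main()
{
  cl_int ret;
  cl_platform_id platform;
  cl_uint nplat = 0;
  ret = clGetPlatformIDs(1, &platform, &nplat);
  printf("clGetPlatformIDs ret = %d, platforms = %u\n", (int)ret, nplat);

  cl_device_id device;
  cl_uint ndev = 0;
  ret = clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, &ndev);
  printf("clGetDeviceIDs ret = %d, GPU devices = %u\n", (int)ret, ndev);

  // If no GPU device was found above, the next two calls will also fail.
  cl_context context = clCreateContext(NULL, 1, &device, NULL, NULL, &ret);
  printf("clCreateContext ret = %d\n", (int)ret);

  cl_command_queue queue = clCreateCommandQueue(context, device, 0, &ret);
  printf("clCreateCommandQueue ret = %d\n", (int)ret);

  clReleaseCommandQueue(queue);
  clReleaseContext(context);
  return 0;
}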

At this point I concluded that my graphics card does not support double precision floating point calculations.

A bit of work with Google turned up some more information.

http://developer.nvidia.com/cuda-gpus
lists Nvidia graphics processors and their "compute capability". The entry for mine is Quadro NVS 135M, compute capability 1.1.

http://www.herikstad.net/2009/05/cuda-and-double-precision-floating.html
offers some interpretation of compute capability:

To enable the use of doubles inside CUDA kernels you first need to
make sure you have a CUDA Compute 1.3-capable card. These are the newer
versions of the nVidia CUDA cards such as the GTX 260, GTX 280, Quadro
FX 5800, and Tesla S1070 and C1060. Thereby you have to add a command
line option to the nvcc compiler: --gpu-architecture sm_13.
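
That advice is specific to CUDA and nvcc; for the OpenCL route that the bigmin example takes, the analogous requirement (as I understand it) is that the kernel source itself enable the cl_khr_fp64 extension before using double. A sketch with an illustrative kernel, not the bigmin one:

// Sketch only: on OpenCL 1.0/1.1 implementations a kernel that uses double
// typically will not build unless the fp64 extension is enabled in the source.
const char *kernel_src =
  "#pragma OPENCL EXTENSION cl_khr_fp64 : enable        \n"
  "__kernel void scale(__global double *x, double a)    \n"
  "{                                                    \n"
  "    int i = get_global_id(0);                        \n"
  "    x[i] *= a;                                       \n"
  "}                                                    \n";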

The ever-helpful Wikipedia entry for CUDA, http://en.wikipedia.org/wiki/CUDA, agrees:

CUDA (with compute capability 1.x) uses a recursion-free,
function-pointer-free subset of the C language, plus some simple
extensions. However, a single process must run spread across multiple
disjoint memory spaces, unlike other C language runtime environments.

CUDA (with compute capability 2.x) allows a subset of C++ class
functionality; for example, member functions may not be virtual (this
restriction will be removed in some future release). [See CUDA C
Programming Guide 3.1 - Appendix D.6]

Double precision (CUDA compute capability 1.3 and above) deviates
from the IEEE 754 standard: round-to-nearest-even is the only supported
rounding mode for reciprocal, division, and square root. In single
precision, denormals and signalling NaNs are not supported; only two
IEEE rounding modes are supported (chop and round-to-nearest-even), and
those are specified on a per-instruction basis rather than in a control
word; and the precision of division/square root is slightly lower than
single precision.

So you need a graphics processor with compute capability 1.3 or above.

I would urge everyone to try to get this example running and share your experiences. OpenCL looks like a promising way to parallelize some applications. The overview document
http://www.khronos.org/assets/uploads/developers/library/overview/opencl-overview.pdf
implies that it might be possible to tune an application to use either a GPU or multiple cores on a cluster. Unfortunately the learning curve is steep (ask Dave) and the documentation is thin.

Happy hacking,
John

John Sibert
Emeritus Researcher, SOEST
University of Hawaii at Manoa

Visit the ADMB project http://admb-project.org/

On 05/12/2012 05:31 AM, dave fournier wrote:

Has anyone else actually got this example to work?

Some advice. Older GPUs (whatever "older" means) probably do not support double precision.

WRT using the BFGS update on the CPU: it does not seem to perform as well as doing it on the GPU. I think this is due to roundoff error; the CPU is carrying out the additions in a different way. It may be that with, say, 4K or more parameters and this (artificial) example, roundoff error becomes important.

I stored the matrix by rows. It now appears that it should be stored by columns for the fastest matrix * vector multiplication.
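
For anyone curious why, here is a sketch (an illustrative kernel, not the one in the branch) of the access pattern: with the matrix stored by columns, work-items computing consecutive rows read consecutive addresses on every pass through the loop, so the global-memory loads coalesce.

// Sketch only: y = A*x with the n-by-n matrix A stored column by column,
// i.e. A(i,j) lives at a[j*n + i].  Work-item i computes row i, and
// neighbouring work-items read neighbouring elements of a for each j.
const char *matvec_src =
  "#pragma OPENCL EXTENSION cl_khr_fp64 : enable             \n"
  "__kernel void matvec_colmajor(__global const double *a,   \n"
  "                              __global const double *x,   \n"
  "                              __global double *y, int n)  \n"
  "{                                                         \n"
  "    int i = get_global_id(0);     /* one row per item */  \n"
  "    double sum = 0.0;                                     \n"
  "    for (int j = 0; j < n; ++j)                           \n"
  "        sum += a[j * n + i] * x[j];                       \n"
  "    y[i] = sum;                                           \n"
  "}                                                         \n";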

_______________________________________________
Developers mailing list
Developers@admb-project.org
http://lists.admb-project.org/mailman/listinfo/developers