GPU Computing


LabVIEW GPU Computing

"G Calling GPU. G Calling GPU. Come in, GPU."

The GPU Analysis Toolkit has landed!

Try out a pre-release version of the new GPU Analysis Toolkit for LabVIEW 2012. The toolkit runs on Windows 32-bit and 64-bit platforms. Features include:

  1. A set of VIs for controlling NVIDIA GPUs and their resources using CUDA v4.0 or later.
  2. A set of VIs for calling NVIDIA CUBLAS and CUFFT library functions.
  3. A G-based SDK for calling custom GPU functions from a LabVIEW application.

Visit the download page and choose GPU Analysis Toolkit from the drop-down list.
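To give a flavor of item 3, a custom GPU function called from G typically lives in a DLL exported with C linkage so that LabVIEW's Call Library Function Node can bind to it. Below is a minimal, hypothetical sketch of such an export. It is a CPU-only stand-in: the name `lv_vector_add` and the error-code convention are illustrative, not part of the toolkit, and a real GPU version would copy the arrays to the device and launch a CUDA kernel in place of the loop.

```c
#include <stddef.h>

/* Hypothetical export that a LabVIEW Call Library Function Node could bind
   to.  A real GPU build would transfer a/b to the device and launch a CUDA
   kernel here; this CPU stand-in only shows the calling convention. */
#ifdef _WIN32
#define LV_EXPORT __declspec(dllexport)
#else
#define LV_EXPORT
#endif

LV_EXPORT int lv_vector_add(const float *a, const float *b,
                            float *out, size_t n)
{
    if (a == NULL || b == NULL || out == NULL)
        return -1;              /* error code returned to the G diagram */
    for (size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
    return 0;                   /* success */
}
```

On the LabVIEW side, the Call Library Function Node would be configured with matching array-pointer and integer parameter types, and the return value wired to error handling on the diagram.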

Message 1 of 52

Thank you! I have been waiting so long for a wrapper like this...

But I can't use it, because I only have CUDA 1.1.

Is it possible to preserve backward compatibility with CUDA 1.x? Or was something specific from CUDA 2.2 used internally?


Message 2 of 52

Very nice. I will test it^^

Message 3 of 52

I've found that examples I created in the past using CUDA v1.x could not run under CUDA v2.x without recompiling my DLL. In effect, the rebuild links against the v2.x import libraries that tie into the CUDA runtime system. You can install this module on your system and try to run the Black-Scholes example (European Call) without any work on your part. Either way, I'd appreciate it if you posted the results here (or on the general discussion thread for GPU computing).

Message 4 of 52

Does the framework support running in emulation mode for developing on computers without a CUDA-capable GPU?

Message 5 of 52

I've never tried it. All of my development systems have NVIDIA hardware compatible with CUDA. In theory, it should work. If you try it, please post the results.

Message 6 of 52

That all sounds exciting. Unfortunately I don't have a GPU on my laptop, and I am still in Austin, flying back tonight. In the meantime, the "discuss" link for this topic seems to be broken. Maybe somebody could fix it...

On my GPU computer at home I have LabVIEW 8.6 and LabVIEW 2009 (32-bit and 64-bit) installed at the same time. How does the GPU installer know where to install? Can I choose? (I know 64-bit is out, but how is the decision made between the two 32-bit versions? Will both be upgraded?)

Message 7 of 52

I'll look into the discussion link issue. As for the installer, it differs somewhat from the typical LabVIEW approach: all files are installed into an LVCUDA directory that parallels the (default) CUDA directory for the toolkit (i.e. c:\CUDA).

You can copy the LVCUDA folder to new locations (e.g. under a specific LabVIEW location) to support multiple LV versions. The VIs are designed to look for the support DLLs in a relative location.

The VIs were compiled in LV 8.6, so you'll see the dirty asterisk on load in LV 2009. If you move the folder, please post your results, as I have not investigated every permutation.

I did not limit the installation to 32-bit. In some cases it is possible to invoke 32-bit apps within a 64-bit OS; however, I'm not sure the NVIDIA drivers that support CUDA would be accessible that way.

Message 8 of 52

What is the difference between writing my own CUDA DLL and calling it from LabVIEW versus using this library? Obviously there are some conveniences to using the LV library. But besides the obvious, what advantage is there?

Message 9 of 52

This is explained in the document LVCUDA - Why Do I Need A Compute Context. In that reference, there is one type of GPU computing (Resource Independent) that can use a CUDA DLL without the LV library.

Note that using this library also requires recompiling your DLL against our NICompute context layer. Together, this ensures that when you call a CUDA-based DLL function from a LabVIEW diagram, (a) the correct GPU device is targeted and (b) the (cached) parameters on that device are valid.
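To make the context point concrete, here is a hypothetical, CPU-only sketch of the wrapper pattern described above: before a user's function runs, the layer checks that the call is executing against the device context that owns the cached parameters. All names here (`ni_ctx`, `ni_call_on_context`, `demo_fn`) are illustrative inventions, not the toolkit's actual API; a real implementation would manage CUDA contexts rather than a plain struct.

```c
#include <stddef.h>

/* Illustrative stand-in for a per-device compute context. */
typedef struct {
    int device_id;          /* which GPU owns the cached resources */
    void *cached_params;    /* device allocations tied to this context */
} ni_ctx;

typedef int (*gpu_fn)(void *params);

/* Example user function: reports whether it received cached parameters. */
static int demo_fn(void *params)
{
    return params == NULL ? 0 : 1;
}

/* Hypothetical wrapper: refuse to run a function against any context other
   than the one its cached parameters belong to.  In the real toolkit, this
   role is played by the NICompute context layer. */
int ni_call_on_context(ni_ctx *ctx, int expected_device, gpu_fn fn)
{
    if (ctx == NULL || ctx->device_id != expected_device)
        return -1;          /* wrong device: cached pointers would be invalid */
    return fn(ctx->cached_params);
}
```

The point of the guard is that a device pointer cached under one GPU's context is meaningless under another's, so the wrapper fails fast instead of handing the kernel a stale pointer.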

Message 10 of 52