
GPU Computing


Which NVIDIA graphics card models does the LabVIEW GPU Analysis Toolkit support?

I recently wanted to buy an NVIDIA graphics card for FFT acceleration. Does anyone know which models the LabVIEW GPU Analysis Toolkit supports? The picture shows the workstation configuration I chose; it includes two NVIDIA graphics cards.

 

0 Kudos
Message 1 of 5
(4,474 Views)

The Quadro P2200 appears to be a good GPU for CUDA, as NVIDIA rates it fairly high on its "Compute Score" for CUDA. But it is not clear to me that the LabVIEW GPU Analysis Toolkit is even still supported. I put in a support request to NI on the subject and got no response. After trying to get it working for a week or so, I have switched to learning to write C++ CUDA code instead.

 

0 Kudos
Message 2 of 5
(4,462 Views)

From what I've learned, you should be able to use any hardware released after 2016; NVIDIA made their GPUs backward compatible with earlier software.

0 Kudos
Message 3 of 5
(4,455 Views)

Hi,

 

If you have Python programming skills, you can use the various GPU-accelerated libraries from the Python ecosystem and call a Python script from LabVIEW (LabVIEW 2018 and later).

 

I have had decent results with the Numba library (http://numba.pydata.org/), which is quite well documented. You can find quite a few examples on GitHub.

 

It is probably not as fast as C++ CUDA, but it can be handy for prototyping.
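As a concrete sketch of that workflow, here is a minimal Python module that a LabVIEW Python Node (LabVIEW 2018+) could call. The function name `power_spectrum` and the plain-NumPy implementation are my own illustration; on a CUDA-capable machine you could swap in a GPU-accelerated FFT behind the same interface:

```python
# fft_helper.py - minimal module a LabVIEW Python Node could call.
# The function name and NumPy-based implementation are illustrative;
# a GPU-accelerated FFT could replace np.fft behind this interface.
import numpy as np

def power_spectrum(samples):
    """Return the magnitude spectrum of a real-valued signal as a
    plain list, so LabVIEW receives a simple 1D array of doubles."""
    spectrum = np.fft.rfft(np.asarray(samples, dtype=np.float64))
    return np.abs(spectrum).tolist()
```

From LabVIEW you would open a Python session, point it at this file, and call `power_spectrum` with a 1D array of samples; returning a list keeps the data a type the Python Node maps cleanly back to LabVIEW.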

0 Kudos
Message 4 of 5
(4,450 Views)

This overlaps with some discussion I had over here:

https://forums.ni.com/t5/GPU-Computing/CUDA-driver-error-359631-when-running-Multi-channel-FFT-examp...

 

The P2200 is a good card. I think it's great for getting you started.

 

In considering a new card, you'll first want to think about how much GPU memory you'll need; the CUDA FFT (cuFFT) documentation has some details on this. (However, you can have the input/output data reside in page-locked CPU RAM, set up the FFT work areas in CPU RAM, and/or use multiple GPU cards. I've personally used the first two workarounds to process large 3D FFTs for image-volume deconvolution.)
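To put rough numbers on the memory question, here is a back-of-the-envelope estimate (my own sketch, not from the cuFFT docs): an out-of-place single-precision complex 3D FFT needs the input array, the output array, and very roughly one extra array's worth of work area:

```python
# Rough GPU-memory estimate for an out-of-place complex64 3D FFT.
# The "one extra array for the work area" factor is a rule of thumb,
# not a guarantee - cuFFT's actual workspace varies with the size.

def fft3d_mem_estimate_bytes(nx, ny, nz, bytes_per_element=8):
    """complex64 = 8 bytes/element; input + output + ~1x work area."""
    one_array = nx * ny * nz * bytes_per_element
    return 3 * one_array

# A 1024^3 volume already needs on the order of 24 GiB:
gib = fft3d_mem_estimate_bytes(1024, 1024, 1024) / 2**30
```

Numbers like that make it clear why staging the input/output in page-locked host RAM (or splitting across cards) becomes necessary for large volumes long before the FFT itself is the bottleneck.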

 

Secondly, you'll want to think about which version of CUDA you'll need (older cards won't have the latest features, or be supported by the latest CUDA release). That means looking at which compute capability you need. Especially important for FFT work is the card's floating-point performance.
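As an illustration of that check, a tiny lookup like the one below can tell you whether a card's compute capability meets a library's minimum. The table entries are values I believe are correct at the time of writing, so verify them against NVIDIA's own compute-capability list before buying:

```python
# Minimal compute-capability gate. The table is illustrative - check
# NVIDIA's official compute-capability list before relying on it.
COMPUTE_CAPABILITY = {
    "Quadro P2200": (6, 1),   # Pascal
    "TITAN V":      (7, 0),   # Volta
    "TITAN RTX":    (7, 5),   # Turing
}

def meets_minimum(card, required):
    """True if the card's (major, minor) capability >= required."""
    return COMPUTE_CAPABILITY[card] >= required

# e.g. a library requiring compute capability 3.5 or newer:
ok = meets_minimum("Quadro P2200", (3, 5))
```

Storing the capability as a `(major, minor)` tuple lets Python's ordinary tuple comparison do the version check.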

 

In general, the TITAN cards are specifically designed for heavy CUDA workloads, but they are also expensive. Clock speeds and processing power for the different data types are listed in NVIDIA's spec sheets. Also factor in some overhead for transferring the data to the card and transferring it back.
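That transfer overhead is easy to ballpark: divide the payload size by the PCIe bandwidth in each direction. The ~12 GB/s figure below is an assumed effective rate for PCIe 3.0 x16, not a measurement from any particular system:

```python
# Ballpark host<->GPU transfer time. 12 GB/s is an assumed effective
# PCIe 3.0 x16 rate; measure on your own system for real numbers.

def transfer_seconds(n_bytes, gb_per_s=12.0):
    """Seconds to move n_bytes one way at the given bandwidth."""
    return n_bytes / (gb_per_s * 1e9)

# Moving a 1 GB buffer to the card and the result back costs
# roughly a sixth of a second before any FFT work happens:
round_trip = transfer_seconds(1e9) + transfer_seconds(1e9)
```

If the FFT itself finishes in milliseconds, this round trip can dominate, which is one argument for keeping data resident on the GPU across multiple operations.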

 

Thirdly, you may want to make sure you can use a higher-level library (like that Python library) if you'd rather abstract away the CUDA calls into something easier to start with.

 

 

0 Kudos
Message 5 of 5
(4,441 Views)