‎12-08-2008 11:35 AM
Hi All,
I'm looking for an easy way to offload some processing to my graphics card because my CPU is getting overloaded with repeated simple operations. I wanted to know if I could do it without getting into DirectX or OpenGL myself.
My question is: do the image processing operations in the NI Vision module take advantage of hardware acceleration, or are they all done in software on the CPU?
I found this thread, which seems to indicate that LabVIEW does not use graphics hardware acceleration, but it does not speak to the NI Vision VIs.
Thanks,
Greg
‎12-08-2008 12:22 PM
Hi, Greg,
There is nothing related to NI Vision in the thread you found.
In general, behind NI Vision there are just a few Windows DLLs. In Vision 8.6 they were partially optimized for multicore processing (only some functions).
Refer to the NI Vision 8.6 Development Module Readme.
I compared several functions and got the following results:
So, if you need faster processing, there are several possibilities:
1. Use another high-performance library, for example Intel Integrated Performance Primitives.
2. Develop your own DLL for simple operations (compile it with the Intel Compiler).
3. Use the GPU (in this case I would suggest trying NVIDIA CUDA, but there is no ready-to-use solution. You would need to work out how to build a DLL with the CUDA compiler, then call it from LabVIEW. I tried this, and it works in general).
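A minimal sketch of the DLL route in option 3, written in plain C so it compiles anywhere. The function name `AddArrays` and its signature are hypothetical (not from this thread), and the CPU loop stands in for what would be a CUDA kernel launch in a real nvcc build; LabVIEW would call the exported function through a Call Library Function Node.

```c
#include <stddef.h>

/* Hypothetical exported entry point for LabVIEW's Call Library Function Node.
   In a real CUDA build, the loop below would be replaced by cudaMalloc,
   cudaMemcpy, and a kernel launch; here a plain-C loop stands in so the
   interface sketch compiles without a GPU toolchain. */
#ifdef _WIN32
#define DLL_EXPORT __declspec(dllexport)
#else
#define DLL_EXPORT
#endif

DLL_EXPORT int AddArrays(const float *a, const float *b, float *out, int n)
{
    if (a == NULL || b == NULL || out == NULL || n < 0)
        return -1;            /* status code LabVIEW can check */
    for (int i = 0; i < n; i++)
        out[i] = a[i] + b[i]; /* element-wise work a GPU kernel would do */
    return 0;
}
```

In LabVIEW, a Call Library Function Node would be configured to call `AddArrays`, wiring the arrays as pointers to array data and `n` to the array length, with the `int` return value checked as a status code.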
Andrey.
‎03-12-2009 04:15 PM
Item #3 is interesting:
3. Use the GPU (in this case I would suggest trying NVIDIA CUDA, but there is no ready-to-use solution. You would need to work out how to build a DLL with the CUDA compiler, then call it from LabVIEW. I tried this, and it works in general).
I was looking to attempt something similar, i.e., using a GPU to offload work from the host machine's CPU. When you developed the DLL, how did you indicate to the compiler what ran on the CPU and what ran on the GPU? Does LabVIEW recognize the DLL? If so, did it execute as seamlessly as NVIDIA claims? Thanks for your attention.
‎03-12-2009 04:31 PM
Lebecker,
We did end up implementing this solution, though I did not do it personally, so I do not know all the details. We created DLLs in C that interface with the NVIDIA CUDA calls, then called those DLLs from LabVIEW. That way, only the individual operations encapsulated in the DLLs were offloaded to the GPU. I'm not sure how seamless NVIDIA claims it to be, but it worked without a problem, and we got the performance boost we were looking for.
‎03-13-2009 10:18 AM
Hi everyone,
I do not believe there is any built-in functionality in LV or the NI VIs to offload processing to the GPU. The best bet is the CUDA path.
‎03-15-2009 04:57 AM
Well, now that OpenCL is an official standard, maybe it's something which could be supported natively in future versions of LabVIEW.
If NI is serious about continuing to exploit the inherently parallel nature of LV, this would seem to be a logical progression to me.
Shane.
‎03-16-2009 10:23 AM
Done.
I would have thought that the R&D guys would already be aware of something like this, but just in case...
Shane.
‎04-24-2009 02:59 AM
Nvidia has released OpenCL drivers for their graphics cards.
http://www.tomshardware.com/news/Nvidia-Cuda-OpenCL-SDK,7596.html
Shane.