09-30-2016 05:06 AM
Using Windows 10/64bits + labview 2014/64bits + CUDA 8.0 for windows10 x86/64 + GPUAnalysis64bits labview toolkit.
Running "Get GPU Device Information.vi", the "All Devices" tab control page indicates 4GB for Total Global Memory, whereas the graphics card has 8GB.
The cuDeviceTotalMem function seems to be limited to 32 bits. I don't understand. Can anyone explain where my mistake is?
Thanks a lot! (Newbie inside)
09-30-2016 08:39 AM
This was a limitation of the CUDA toolkit interface from CUDA v4.0. Yes, the size is limited to 32 bits. The function in the GPU toolkit has not been updated to return a 64-bit value. I don't recall whether the size is clipped or taken modulo 2^32; I saw the behavior on the first Tesla card that had 6GB of memory.
09-30-2016 08:50 AM
Thanks MathGuy! For cuFFT, cuBLAS and the other functions of the GPU Analysis palette, can I pass these functions a graphics RAM value above 4GB? Or can the GPU Analysis palette only manage 4GB of GRAM?
Sorry for my English (I'm French) and for the basic questions.
09-30-2016 09:07 AM
I don't recall where the 32-bit limitation occurred in the GPU toolkit interfaces. There's a subtle difference between functions which take a memory address and those which take a memory length/size. So it may be possible that functions like cuFFT can work with memory larger than 4GB, because they use a (64-bit) memory address to reference the memory.
However, the function(s) allocating memory and returning an address to the memory block may or may not have the 32-bit limitation. In LabVIEW, you can check the parameter's data type (e.g. I32/U32 or I64/U64) on its front panel to see what it supports.
09-30-2016 09:19 AM
OK, the fact that cuFFT and cuBLAS support 64-bit memory addresses, as you mention, is enough for me, even if they don't support working with an amount of memory larger than 4GB.
When in doubt, the best way for me is to try it.
Thanks a lot !