
GPU Computing


I've got this error message when starting the GPU Analysis Toolkit, but I could not find any solution.

Hello everyone.

I'm a newbie in this GPU community.

I started studying this just a few days ago.

I barely know the definitions and basics.

Furthermore, I don't know any C++; I have only been using LabVIEW and MATLAB.

At the moment,

I have created a matrix-matrix computation VI using the GPU.

It worked fine at first.

However, it suddenly stopped working properly.

It displays an error message like this:

::: error message :::

CODE : -359631

-----------------------------------

call to cudaMemcpy in cudart32_50_35.dll.
NVIDIA provides the following information on this error condition:

code:
cudaErrorUnknown = 30

comments:
This indicates that an unknown internal error has occurred.

library version supplying error info:
4.1

The following are details specific to LabVIEW execution.

library path:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\bin

call chain:
-> lvcuda.lvlib:CUDA CSG Device Ptr.lvclass:Upload 1D Data.vi:1
-> [Calculation] CUBLAS.matrixComputation.Manual.ver20130711.vi

IMPORTANT NOTE:
Most NVIDIA functions execute asynchronously. This means the function that generated this error information may not be the function responsible for the error condition.

If the functions are from different NVIDIA libraries, the detailed information here is potentially for an unrelated error.

-----------------------------------

I coded the VI as shown in

kl.PNG.

At the breakpoint (filled red dot), the error message appears.

Is there any way to initialize the GPU without using VIs?

The first time, it sometimes works fine and sometimes does not.

After that, it does not work at all.

Although I have changed where cuFFT and cuBLAS are initialized, it still does not work.

So I coded another one, shown in the attached file.

The first time, it works fine.

However, once the error message appeared, it also stopped working.

kll.PNG

How can I resolve this error?

The above matrix-computation VI works well on another computer.

I'm using

> NVIDIA NVS 160M

> Windows 7 32-bit

> RAM 3 GB

> LabVIEW 2012 32-bit

> GPU Analysis Toolkit 2012

If you have any comments, I would really appreciate it!

Have a nice day.

Thank you in advance.

Message 1 of 4

Your simple case has multiple issues that may result in undefined behavior in the CUDA runtime engine:

  • You don't wire the size (in elements) input to the Allocate Memory VI, which creates an empty GPU buffer of CSG elements;
  • You don't wire the size (in elements) input to the Initialize Memory VI, which results in a request to initialize 0 elements of the empty GPU buffer;
  • You configured the CUFFT plan to perform a batched 1D-FFT operation for 10 signals, each of which is supposed to have 10 elements (see the CUDA-level sketch after this list).
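
For readers who think in C rather than G, here is a rough CUDA-level sketch of what those three settings have to agree on. It is only an illustration, not the toolkit's implementation: the element counts are the 10-signals-of-10-elements configuration mentioned above, and everything else (variable names, structure) is assumed.

/* Sketch: the buffer must be allocated and initialized for the same element
 * count that the batched FFT plan is configured for. Compile with nvcc.     */
#include <cuda_runtime.h>
#include <cufft.h>
#include <stdio.h>

int main(void)
{
    const int signal_len = 10;                 /* elements per signal (assumed) */
    const int batch      = 10;                 /* number of signals (assumed)   */
    const size_t n_elems = (size_t)signal_len * batch;

    /* Equivalent of wiring the size input to the Allocate Memory VI:
     * without it the GPU buffer would hold 0 elements.                        */
    cufftComplex *d_buf = NULL;
    if (cudaMalloc((void **)&d_buf, n_elems * sizeof(cufftComplex)) != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed\n");
        return 1;
    }

    /* Equivalent of wiring the size input to the Initialize Memory VI:
     * the initialization must cover all n_elems elements, not 0.              */
    cudaMemset(d_buf, 0, n_elems * sizeof(cufftComplex));

    /* Batched 1D FFT plan: 'batch' signals of 'signal_len' points each.       */
    cufftHandle plan;
    if (cufftPlan1d(&plan, signal_len, CUFFT_C2C, batch) != CUFFT_SUCCESS) {
        fprintf(stderr, "cufftPlan1d failed\n");
        return 1;
    }

    cufftDestroy(plan);
    cudaFree(d_buf);
    return 0;
}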

When I correct these on the diagram (see snapshot), I do not get any errors after repeated runs.

[SimpleCase] Initialize GPU - updated.png

It's evident from your actual application VI snapshot that you wired sizes to these VIs in that scenario. However, if the sizes you use are not correct, it's not uncommon to crash the CUDA runtime or the device's compute context by asking it to perform a function on data of size N while the GPU buffer storing the data has fewer than N elements.

cudaErrorUnknown (30) covers this type of situation, as it's an exceptional occurrence. Because CUDA's C API is pointer-based, it's the user's responsibility to (a) configure the operation with the proper data sizes and (b) preallocate the GPU buffers storing the data to at least the sizes passed to the function. If you don't, the function will crash just as it would if you programmed it in C.
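
To make (a) and (b) concrete in code, here is a minimal sketch using plain CUDA and cuBLAS. It assumes a real single-precision GEMM purely for brevity (the toolkit's CSG class is complex single precision, but the sizing rule is the same), and all dimensions and names are illustrative, not taken from your VI.

/* Sketch: the buffers are allocated from the same m, n, k that the GEMM is
 * configured with, so the operation can never touch unallocated memory.
 * Buffer contents are irrelevant for this sizing sketch.                    */
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main(void)
{
    const int m = 128, k = 32, n = 1;            /* (a) operation sizes, chosen first */
    float *dA, *dB, *dD;

    /* (b) each buffer holds at least the element count the GEMM will touch   */
    cudaMalloc((void **)&dA, (size_t)m * k * sizeof(float));
    cudaMalloc((void **)&dB, (size_t)k * n * sizeof(float));
    cudaMalloc((void **)&dD, (size_t)m * n * sizeof(float));

    cublasHandle_t handle;
    cublasCreate(&handle);

    /* The GEMM is configured with exactly the same m, n, k used above        */
    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                &alpha, dA, m, dB, k, &beta, dD, m);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dD);
    return 0;
}

If the m, n, k passed to the GEMM were larger than what the buffers were allocated for, the kernel would read or write past the end of an allocation, which is exactly the kind of undefined behavior that later surfaces as cudaErrorUnknown.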

In my experience, most of these cases do not cripple the device so badly that the system (or OS) has to be rebooted. It's enough to restart the CUDA runtime engine. This is done by exiting LabVIEW and restarting it. There is no way to unload CUDA without shutting down LabVIEW because it is implicitly loaded by the environment that calls it (in this case LabVIEW).

It's also worth noting the 'IMPORTANT' portion of the error string you've copied in your post. Unless you've serialized all GPU functions (including those running in parallel from other VIs in your app), the location of your breakpoint doesn't mean the GPU VI that gave you the error message is also the one responsible for the error condition. You can use a sequence structure along with wiring the error cluster terminals to enforce a common execution path, which helps debug the problem.
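
The text-based equivalent of that serialization, shown below as a hedged sketch (the check() helper and stage names are illustrative, not part of the toolkit), is to force a synchronization point after each GPU step and read back any pending error there, so the error is reported at the step that actually caused it.

/* Sketch: synchronize after each stage so asynchronous errors are attributed
 * to the right step, analogous to chaining the LabVIEW error wire.          */
#include <cuda_runtime.h>
#include <stdio.h>

static int check(const char *stage)
{
    cudaError_t err = cudaDeviceSynchronize();   /* wait for queued async work */
    if (err == cudaSuccess)
        err = cudaGetLastError();                /* pick up any sticky error   */
    if (err != cudaSuccess) {
        fprintf(stderr, "%s: %s\n", stage, cudaGetErrorString(err));
        return 1;
    }
    return 0;
}

int main(void)
{
    float *d_buf = NULL;

    cudaMalloc((void **)&d_buf, 1024 * sizeof(float));
    if (check("allocate")) return 1;             /* failure belongs to this step */

    cudaMemset(d_buf, 0, 1024 * sizeof(float));
    if (check("initialize")) return 1;

    cudaFree(d_buf);
    return check("free");
}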

That's about as much help as I can provide given the code snapshots in your post.

Darren

Message 2 of 4

Dear Darren,

Thank you for your kind answer.

I tried what you suggested.

It also displayed an error message, but not the one above.

Below is the error message that I tried to describe before.

asd.PNG

The message is:

---------------------

Error -359631 occurred at call to cudaMalloc in cudart32_50_35.dll.

Possible reason(s):
NVIDIA provides the following information on this error condition:

code:
cudaErrorMemoryAllocation = 2

comments:
The API call failed because it was unable to allocate enough memory to perform the requested operation.

library version supplying error info:
4.1

The following are details specific to LabVIEW execution.

library path:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\bin

call chain:
-> lvcuda.lvlib:CUDA CSG Device Ptr.lvclass:Allocate Memory.vi:1
-> [SimplestCase] InitializeGPU.vi

IMPORTANT NOTE:
Most NVIDIA functions execute asynchronously. This means the function that generated this error information may not be the function responsible for the error condition.

If the functions are from different NVIDIA libraries, the detailed information here is potentially for an unrelated error.

---------------------------

I think this error message concerns memory initialization.

This error message appears several times.

This time, it appears as soon as I start LabVIEW, open the VI, and run it.

Thinking the CUDA-related DLL (cudart32_50_35.dll) might be damaged, I reinstalled CUDA. However, that did not help; it displayed the error message again.
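
For reference, here is a small hedged check in plain CUDA C (outside LabVIEW; all names are illustrative) that can help tell a genuine out-of-memory condition apart from a broken compute context: if even this simple query fails, the context is unusable and restarting LabVIEW, as Darren described, is the fix; if it succeeds but reports very little free memory, the cudaErrorMemoryAllocation is real.

/* Sketch: query free and total device memory before allocating anything.    */
#include <cuda_runtime.h>
#include <stdio.h>

int main(void)
{
    size_t free_bytes = 0, total_bytes = 0;

    if (cudaMemGetInfo(&free_bytes, &total_bytes) != cudaSuccess) {
        fprintf(stderr, "compute context is unusable; restart the host process\n");
        return 1;
    }

    printf("GPU memory: %zu MB free of %zu MB total\n",
           free_bytes >> 20, total_bytes >> 20);
    return 0;
}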

Thank you for the comment.

And I hope that the error is resolved!

Thank you in advance.

Albert

Message 3 of 4

Dear Darren,

Thank you for the answer.

I finally found out why the error message was displayed.

Actually, I don't know the exact reason.

However, it works when I added some VIs before "Initialize cuFFT and cuBLAS" that specify the element size for the Initialize Memory and Allocate Memory VIs.

Thank you!

4564.PNG

However, I then tried to increase the size of the array that I want to calculate.

When I use the cuBLAS GEMM, the above VI works fine under this condition: matrix A is 100 x 30, B is 30 x 1, C is 0 x 0, and D is 100 x 1.

However, it returns an error message when I increase the matrix sizes to A: 3000 x 1000, B: 3000 x 1, C: 0 x 0, with the result D defined as 1000 x 1.

The NVS 160M graphics card I am using supports a 2D array size of up to 65536 x 32768 (checked with "deviceQuery": max texture dimension size (x, y, z)).
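
As a side note, the texture-dimension limit reported by deviceQuery applies to textures, not to cuBLAS buffers; for GEMM the more relevant bound is free device memory. A quick back-of-the-envelope estimate (assuming the complex single-precision CSG buffers used above, 8 bytes per element; this is only host-side arithmetic, not the toolkit's code):

/* Sketch: rough device-memory footprint of the larger GEMM operands.        */
#include <stdio.h>

int main(void)
{
    const size_t elem = 8;                        /* bytes per complex single  */
    const size_t A = (size_t)3000 * 1000 * elem;  /* ~22.9 MiB                 */
    const size_t B = (size_t)3000 * 1    * elem;  /* ~23.4 KiB                 */
    const size_t D = (size_t)1000 * 1    * elem;  /* ~7.8  KiB                 */

    printf("A: %.1f MiB  B: %.1f KiB  D: %.1f KiB  total: %.1f MiB\n",
           A / 1048576.0, B / 1024.0, D / 1024.0, (A + B + D) / 1048576.0);
    return 0;
}

Roughly 23 MiB in total, so the operand sizes themselves should fit comfortably in the NVS 160M's memory unless other allocations are still holding device memory.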

I will try some things to resolve this error.

Albert

Message 4 of 4