GPU Computing

Nonlinear fit (Lev-Mar) with GPU computing

Dear all,

I'm working on a proprietary application whose key function is a nonlinear fit.

A brief intro: I have N matrices (images) and two unknown matrices of velocity fields (Vx, Vy). Image N(i) is derived from image N(i-k) (k an integer) according to the Vx and Vy fields. After parameterizing Vx and Vy and supplying an initial guess, I run the Lev-Mar Nonlinear Fit VI with a model VI. Everything works well and I get a good solution for Vx and Vy; however, at higher resolution (i.e., with more parameters in my Vx, Vy matrices) the calculation takes a considerable amount of time.
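To illustrate the structure of such a fit (this is not the poster's actual VI, and the Gaussian test image, grid size, and damping schedule are invented for the demo), here is a minimal Levenberg-Marquardt sketch in Python that recovers a two-parameter shift between frames using a finite-difference Jacobian. With a full Vx/Vy field the parameter vector grows to hundreds or thousands of entries, and the Jacobian and normal-equation work grow with it, which is where the runtime goes:

```python
import numpy as np

# Synthetic smooth "image": a Gaussian blob sampled on an n-by-n grid,
# shifted by (vx, vy). Stands in for the poster's warped frames.
def frame(vx, vy, n=32):
    y, x = np.mgrid[0:n, 0:n].astype(float)
    return np.exp(-((x - 12 - vx) ** 2 + (y - 16 - vy) ** 2) / 30.0)

# Residual between the model frame (parameters p) and the observed frame.
def residual(p, observed):
    return (frame(p[0], p[1]) - observed).ravel()

def lev_mar(p0, observed, lam=1e-3, iters=50, h=1e-6):
    p = np.asarray(p0, float)
    r = residual(p, observed)
    for _ in range(iters):
        # Finite-difference Jacobian: one column per parameter.
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = p.copy()
            dp[j] += h
            J[:, j] = (residual(dp, observed) - r) / h
        A = J.T @ J
        g = J.T @ r
        # Marquardt-damped normal equations.
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -g)
        p_new = p + step
        r_new = residual(p_new, observed)
        if r_new @ r_new < r @ r:   # accept step: reduce damping
            p, r, lam = p_new, r_new, lam / 3
        else:                        # reject step: increase damping
            lam *= 3
        if np.linalg.norm(step) < 1e-10:
            break
    return p

true_shift = (2.5, -1.25)
observed = frame(*true_shift)
fit = lev_mar([0.0, 0.0], observed)
print(fit)  # should converge toward the true shift (2.5, -1.25)
```

The inner products J.T @ J and J.T @ r are exactly the dense linear-algebra kernels that a GPU BLAS library handles well, which is why the fit is a plausible candidate for GPU acceleration.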

I'm not familiar with CUDA programming, but if the answer to the question below is positive, I'll have to learn it (at least in the LabVIEW sense).

Question: Is it possible to integrate GPU computing for such a problem?

Message 1 of 2

Given the information, the answer is likely yes, but you have to understand the following conditions:

  1. The toolkit is designed for use by someone who is *already* familiar with CUDA programming. So, you must have more than just a passing knowledge of CUDA and how it works. In this sense, the LabVIEW offering is just a set of APIs - not a true GPU programming environment from a G perspective.

  2. The toolkit includes standard data types for both the FFT and linear algebra functions (i.e., the operations exported by the CUFFT and CUBLAS libraries). It does not have built-in support for data types offered by other CUDA libraries. If you require those types, you would have to use the toolkit's SDK features to create them for use with your external custom CUDA functions. This would be in addition to the LabVIEW wrappers needed to call your functions from G.
Message 2 of 2