LabWindows/CVI Idea Exchange

layosh.b

Support HPC using GPU computing

Status: New

My $2,000, run-of-the-mill desktop PC has 4 teraflops of brute computational power hiding in its four GPUs, none of which is accessible to the programs I develop in LabWindows.

Shame!

With the release of the new OpenCL it is possible to build a platform-independent GPU computing library. That would put LabWindows on the same footing as (or better than) LabVIEW, which already has some GPU computing support.

The advantages are obvious: huge gains in data-processing speed, real-time applications with streaming data, pattern-recognition (video) applications, image processing, and data-parallel tasks in the technical modeling arena.

I am playing with some optimization algorithms (genetic algorithms, evolutionary algorithms) that benefit and show amazing gains, since they are ideal data-parallel applications! I am currently working on the specs for a new type of controller that would optimize several parameters to figure out the state of a tissue culture (expanding, producing, overgrowing, etc.), maximize productivity, and calculate the optimal settings using evolutionary algorithms. Any complex process control could take advantage of this kind of application, currently out of reach because of computational limitations.

I had a previous optimization task that would have taken 150,000 years to complete using a brute-force algorithm in a single-threaded CPU-based application. Converted to a genetic algorithm, it gives me a good-enough solution in 3-4 days on the same standard PC. With a GPU, that problem could be solved in fifteen minutes, while expanding the evolutionary depth and finding better solutions using even more complex fitness functions.

Ask yourself what you want: tinkering with the conveniences of an IDE that already does the job well enough, or opening the door to new landscapes that could be conquered by combining the simple elegance and effectiveness of LabWindows with the power of GPUs?


4 Comments
NoVIsonme
Member

Is it only me who is using CVI as a fairly simple, robust, and easy-to-understand solution for machine control and instrumentation? Going through these ideas, there are quite a few bemoaning the fact that it isn't Visual Studio, with accompanying resources. Now here's one that wants it to be a cut-price MATLAB!


It seems to me that this is a happily mature and well-rounded package. The lack of bells and whistles (and GPU support!) is a Good Thing. What it does, it does extremely well, and as far as I can tell it is free from bugs and gremlins. Can we please keep it that way?

gdargaud
Active Participant

You should be able to link the .o files generated by CUDA with CVI, no? No need to add complexity.

40tude
NI Employee (retired)

What about an OpenCL library?

Regards, Philippe
Proud to be using LabWindows since version 1.2
Layosh
Member

With the new Kepler architecture and the CUDA 5 toolkit capabilities (including the compiler support), the GPU is more accessible than ever! The issue is that other platforms are moving toward GPU support (including LabVIEW), and CVI is rapidly becoming obsolete, far less powerful than its competitors. This also means my investment in learning CVI is losing its value. If I need more processing power, want to move into new territories, and have to solve computationally expensive problems, I have to leave CVI and eat the ugly frog: learn Visual Studio instead of relying on the standardized, easy-to-understand CVI environment and IDE.

No question, CUDA works under Visual Studio and the Eclipse IDE too; I finally managed to compile under both. But it is a nightmare for me to comb together all the pieces, the libraries, the dependencies, etc., between the systems. I do not want to spend my time walking on this edge, chasing down solutions for each of the problems that must be solved before I have something really useful.

CUDA support for CVI could be the solution, and NI is so good at making libraries simple and very useful that it would boost the market for it. Remember: in the first week after it was made public, more than one and a half million programmers downloaded CUDA 5. That is a market, a big market, even though CUDA is difficult in itself. Combining CUDA with the CVI tools and boosting some of the CVI libraries with GPU support would put a superb tool into the hands of millions of engineers! Millions!