LabVIEW


Build a $100,000 LabVIEW supercomputer?

Sounds like an application for facial recognition in a large crowd.
Message 11 of 21
(1,368 Views)
Nothing that innocent.

global variables make robots angry


Message 12 of 21
(1,324 Views)
The basic idea is what Ben said: "LV will break up the code into threads and then use the OS to get the work done."

As has been noted, LabVIEW automatically multi-threads your application, making it very simple for a programmer to take advantage of multi-core computing.  There are, of course, programming methods that will maximize your return, but the nature of graphical programming tends toward incidental parallelism: independent operations will multi-thread and take advantage of multiple cores with essentially no effort required from the programmer.
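LabVIEW's dataflow parallelism is graphical, so there is no text syntax to show here. As a rough analogy only, in Python's standard `concurrent.futures` module, two independent branches of a block diagram behave like two tasks submitted to a pool (the function names are made up for illustration; real LabVIEW does this scheduling for you):

```python
from concurrent.futures import ThreadPoolExecutor

def filter_signal(samples):
    # stand-in for one independent branch of the block diagram
    return [s * 0.5 for s in samples]

def compute_rms(samples):
    # stand-in for a second, data-independent branch
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

samples = [1.0, 2.0, 3.0, 4.0]

# Two diagram branches with no data dependency between them can run
# concurrently; here each branch becomes a task handed to a thread pool.
with ThreadPoolExecutor(max_workers=2) as pool:
    filtered_future = pool.submit(filter_signal, samples)
    rms_future = pool.submit(compute_rms, samples)

print(filtered_future.result())  # [0.5, 1.0, 1.5, 2.0]
print(rms_future.result())
```

The point of the analogy: in LabVIEW you never write the pool or the `submit` calls — any two wires without a data dependency are already candidates for parallel execution.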

As for the maximum number of cores/processors, you would have to look at the operating system.  I believe Windows XP supports up to 32 cores (you may want to check me on that).  We actually had a demo at NI Week
where we had a single computer with 4 quad-core processors (16 total cores) running the LabVIEW Real-Time operating system.  Real-Time is probably not necessary for you (it doesn't sound like you need a deterministic OS), but it was a good demonstration of a high core-count system.

There are a lot of good resources explaining this at http://www.ni.com/multicore/.  I think this download is also very good.

I hope this helps!
Brian A.
National Instruments
Applications Engineer
Message 13 of 21
(1,288 Views)
Thanks Brian for those numbers. I should expand on one thing that can be done that is "not taught in LV Basics".
 
If you have a collection of things (think multiple microphones) that you want to perform the same analysis on, the "natural" thing to do is to combine all of the microphone signals into an array and use a "lean and mean" sub-VI in a For Loop to do the number crunching. But if you want to use all of your processors, that construct forces each microphone to be processed one after the other. I picked up a lot of speed by using eight arrays (each with one microphone's worth of info) and a "re-entrant" sub-VI for each. The code looks a little weird, and if you did not know what I was doing you might say "Ben! You can simplify your code .... For Loop!"
 
So the idea still is the same. If you want to use multiple processors in parallel, make sure your code "looks parallel".
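Ben's trick has no direct text equivalent since LabVIEW is graphical, but the serial-vs-parallel contrast can be sketched in Python (the `analyze` function is a made-up stand-in for the "lean and mean" sub-VI; note that for real CPU-bound crunching in Python you would use processes rather than threads, which this sketch uses only to stay simple):

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(channel):
    # stand-in for the "lean and mean" analysis sub-VI
    return sum(x * x for x in channel)

# Eight separate arrays, one per microphone (instead of one big 2-D array)
channels = [[float(c + i) for i in range(4)] for c in range(8)]

# Serial construct: a plain For Loop processes one channel after another
serial = [analyze(ch) for ch in channels]

# Parallel construct: one independent task per channel, like eight calls
# into a re-entrant sub-VI that the scheduler can spread across cores
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(analyze, channels))

assert serial == parallel  # same answers, but the work can overlap
```

The answers are identical either way; what changes is that the eight analyses become independent units of work the scheduler can run at once.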
 
Ben
Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper | LinkedIn Profile | YouTube Channel
Message 14 of 21
(1,280 Views)
Brian_A, mind sharing a little info on the specific hardware that you were running on? Was it a shared memory system, distributed memory, or hybrid?
 
Thanks,
root

global variables make robots angry


Message 15 of 21
(1,261 Views)
Well, the demo was not my own, so I do not know the specifics and can't find any at the moment (but I can let you know if I do).  I would imagine you could search around the internet for high-performance computing solutions for sale (I know it's not too hard to find multiple-socket mainboards), along with considerations regarding memory management.  There is also some discussion of memory schemes in the download I mentioned in my last post.

To expand on Ben's comments, check out the "Parallel Programming Strategies for Multicore Processing in LabVIEW" section of the Multicore Programming Fundamentals tutorial in our Developer Zone.  It covers the three main ways to program in order to take maximum advantage of a multi-core system: Task Parallelism, Data Parallelism, and Pipelining.
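Of those three, pipelining is the least obvious one to sketch. The idea is that while stage 2 is still crunching block N, stage 1 can already be acquiring block N + 1. A rough Python analogy, with made-up `acquire`/`process` stages standing in for diagram loops connected by shift registers:

```python
from concurrent.futures import ThreadPoolExecutor

def acquire(i):
    # stage 1: pretend to read a block of samples
    return list(range(i, i + 4))

def process(block):
    # stage 2: the expensive number crunching
    return [x * 2 for x in block]

# Pipelining: the pool crunches block N in the background while the
# main loop is already acquiring block N + 1.
with ThreadPoolExecutor(max_workers=1) as pool:
    results = []
    pending = None
    for i in range(3):
        block = acquire(i)           # overlaps with process() of the
        if pending is not None:      # previous block still in flight
            results.append(sum(pending.result()))
        pending = pool.submit(process, block)
    results.append(sum(pending.result()))

print(results)  # [12, 20, 28]
```

With balanced stages this roughly doubles throughput on two cores, at the cost of one block of extra latency; the tutorial covers the trade-offs in detail.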
Brian A.
National Instruments
Applications Engineer
Message 16 of 21
(1,229 Views)
Thanks for the help. I think I'll just take a stab at ordering an off-the-shelf high-performance computing solution that will support several quad-core processors, and we'll try to benchmark various tasks to see where the bottleneck is before buying something more ambitious. Thanks again for all the advice.
 
root
 
 
 

global variables make robots angry


Message 17 of 21
(1,209 Views)

Hi root,

If you get a chance, try to post back to this thread about what you found.

Thank you,

Ben

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper | LinkedIn Profile | YouTube Channel
Message 18 of 21
(1,202 Views)

Hey Root,

 

I don't know why I didn't think of this before, but check out the NI Week Keynote video from Wednesday, August 6 titled "Introduction and Very Large Telescope" (it's the first one listed under Wednesday).  About 6 min. in they start talking about the telescope application, which shows off some awesome LabVIEW computing power.  Definitely take 10 min. to check it out.  There is a bit of info in the video on the specific Dell hardware used, and for this application they are definitely trying to get the most computing power possible!  SPOILER ALERT: at the end of the video they are running LabVIEW on a 128-core machine!!

Brian A.
National Instruments
Applications Engineer
Message 19 of 21
(1,167 Views)

Root,

 

As has been mentioned in this thread, we were able to build a 16-core system running LabVIEW RT. We actually ran a few machine vision algorithms with our NI Vision Development Module (VDM) library and were able to see how the threads were balanced evenly among the 16 different cores (in 4 different CPUs). There is no theoretical limit to the number of cores LabVIEW and VDM can take advantage of. As has also been mentioned, all LabVIEW and VDM are doing is chunking the processing into as many threads as you have parallel code sequences, while the OS balances the load between the cores and sends each incoming thread to the appropriate core (actually, with LabVIEW Real-Time you can choose which processor to execute on using RT's API or Timed Structures).

The constraint you are going to come up against is a physical one. I have no idea how many cores you can load into a single motherboard at this time; the 16-core machine was pretty impressive to me and I didn't build it, so I can't give you advice on that. If you believe you can have a large number of tasks executing in parallel and even 16 cores might not do it for you, you might want to try parallel computing through "distributed computing" rather than "multicore computing". This is currently much more scalable than trying to add more cores to a single computer system, but it does require you to write code to coordinate thread dispatching and results gathering. You also have to regroup the results after the processing, but I would say the major downside is network latency. Of course, with the budget you are mentioning, you can easily create a local Gigabit Ethernet subnet and minimize this impact. The way you can dispatch processing to other computers/targets in LabVIEW is through VI Server.
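The dispatch-and-regroup coordination Carlos describes can be sketched in outline. This is a Python analogy only, not VI Server code: `remote_analysis` is a made-up stand-in for a VI you would actually invoke on another machine, and the key idea shown is tagging each chunk with an index so results can be put back in order:

```python
from concurrent.futures import ThreadPoolExecutor

def remote_analysis(task):
    # stand-in for a VI invoked on another machine (e.g. via VI Server);
    # in a real cluster this call would cross the network
    index, frame = task
    return index, sum(frame)

frames = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]

# Dispatch: tag every chunk with its index so the results can be
# regrouped in order even when workers finish out of sequence.
tasks = list(enumerate(frames))

with ThreadPoolExecutor(max_workers=3) as pool:
    replies = list(pool.map(remote_analysis, tasks))

# Gather: sort by index, then strip the tags before recombining.
results = [total for _, total in sorted(replies)]
print(results)  # [3.0, 7.0, 11.0]
```

In a real setup each `remote_analysis` call is where the network latency Carlos warns about shows up, so the chunks need to be large enough that computation dominates the round trip.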

Just some ideas to consider. By the way, the vision algorithms which were multi-cored internally for version 8.6 of the NI Vision Development Module are:

•Convolution
•Cross Correlation
•Concentric Rake
•Gray Morphology
•Image Absolute Difference
•Image Addition
•Image Complex Division
•Image Complex Multiplication
•Image Division
•Image Expansion
•Image Logical AND
•Image Logical OR
•Image Logical XOR
•Image Multiplication
•Image Re-sampling
•Image Subtraction
•Image Symmetry
•Morphology
•Nth Order
•Rake
•Spoke

 

Sincerely,

 

Carlos

Software Group Manager

National Instruments

Message 20 of 21
(1,154 Views)