08-12-2008 02:54 PM
Hi root,
If you get a chance try to post back to this thread about what you found.
Thank you,
Ben
08-13-2008 10:14 AM
Hey Root,
I don't know why I didn't think of this before, but check out the NI Week keynote video from Wednesday, August 6, titled "Introduction and Very Large Telescope" (it's the first one listed under Wednesday). About six minutes in they start talking about the telescope application, which shows off some serious LabVIEW computing power. Definitely take ten minutes to check it out. The video includes some details on the specific Dell hardware used, and for this application they are clearly trying to get the most computing power possible! SPOILER ALERT: at the end of the video they are running LabVIEW on a 128-core machine!!
08-13-2008 11:01 AM
Root,
As has been mentioned in this thread, we were able to build a 16-core system running LabVIEW RT. We ran a few machine vision algorithms from our NI Vision Development Module (VDM) library and watched the threads balance evenly across the 16 cores (spread over 4 CPUs). There is no theoretical limit to the number of cores LabVIEW and VDM can take advantage of: LabVIEW and VDM simply chunk the processing into as many threads as you have parallel code sequences, and the OS balances the load by sending each incoming thread to an appropriate core. (With LabVIEW Real-Time you can even choose which processor to execute on, using RT's API or Timed Structures.)

The constraint you are going to run into is physical: how many cores you can fit on a single motherboard today. The 16-core machine was pretty impressive to me, and since I didn't build it I can't give you advice there. If you expect a large number of tasks executing in parallel and even 16 cores might not be enough, you might want to try parallel computing through "distributed computing" rather than "multicore computing". This is currently much more scalable than adding cores to a single computer system, but it does require you to write code to coordinate dispatching the work and to gather and regroup the results after processing. The major downside is network latency, though with the budget you mention you can easily set up a local Gigabit Ethernet subnet and minimize that impact. In LabVIEW, the way to dispatch processing to other computers/targets is through VI Server.
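To make the chunk-and-balance idea concrete, here is a rough sketch in Python (LabVIEW does this graphically, and VDM does it internally, so this is only an analogy): the data is split into one chunk per worker, the workers run as separate processes, and the OS schedules each process onto an available core. The `process_chunk` function is a made-up stand-in for one parallel code sequence.

```python
import os
from multiprocessing import Pool

def process_chunk(chunk):
    """Stand-in for one parallel code sequence, e.g. a per-pixel operation."""
    return [x * 2 for x in chunk]

def parallel_process(data, workers=None):
    workers = workers or os.cpu_count()
    # Chunk the data into one piece per worker; the OS balances the
    # worker processes across the available cores.
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        results = pool.map(process_chunk, chunks)
    # Regroup the per-chunk results into a single output.
    return [x for chunk in results for x in chunk]

if __name__ == "__main__":
    print(parallel_process(list(range(8)), workers=4))  # [0, 2, 4, 6, 8, 10, 12, 14]
```

The same pattern scales with the core count, which is why there is no fixed ceiling on the software side.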
Just some ideas to consider. By the way, these are the vision algorithms that were internally multicore-optimized in version 8.6 of the NI Vision Development Module:
•Convolution
•Cross Correlation
•Concentric Rake
•Gray Morphology
•Image Absolute Difference
•Image Addition
•Image Complex Division
•Image Complex Multiplication
•Image Division
•Image Expansion
•Image Logical AND
•Image Logical OR
•Image Logical XOR
•Image Multiplication
•Image Re-sampling
•Image Subtraction
•Image Symmetry
•Morphology
•Nth Order
•Rake
•Spoke
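The distributed dispatch/gather approach mentioned above can also be sketched in Python. In LabVIEW the dispatch step would be a VI Server call to a VI on another machine; here an ordinary function stands in for the remote worker, and the host names are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical machines on a local Gigabit Ethernet subnet.
WORKERS = ["host-a", "host-b", "host-c", "host-d"]

def dispatch_to_worker(host, chunk):
    # Stand-in for a network round trip (in LabVIEW, a VI Server call).
    # Real code pays network latency here, so chunks should be large
    # enough to amortize that cost.
    return sum(chunk)  # pretend the remote host computed a partial result

def distribute(data):
    # Coordinator: chunk the work, dispatch one chunk per worker...
    size = (len(data) + len(WORKERS) - 1) // len(WORKERS)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=len(WORKERS)) as pool:
        partials = pool.map(dispatch_to_worker, WORKERS, chunks)
    # ...then regroup the partial results on the coordinator.
    return sum(partials)

print(distribute(list(range(100))))  # 4950
```

The coordination code (chunking, dispatching, regrouping) is exactly the extra work you take on with distributed computing that multicore LabVIEW handles for you.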
Sincerely,
Carlos
Software Group Manager
National Instruments