08-08-2008 08:26 AM
I agree that you should check with NI now that NI Week is behind us to see what they want to sell us today.
Another idea:
Distributed processing - Curtiss-Wright offers a product called SCRAMNet, a shared-memory network linked over fiber. The throughput absolutely blows away anything else I have used, but I digress. If you split the work of your app across multiple high-end processors and pass the analysis work to separate machines, the processing capability is limited only by your budget. In your case you may want to skip the fiber switch they sell (unless you need the redundancy), since the switch alone will set you back about $20K.
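There is no way to show G code in text, but the shared-memory idea can be sketched in Python as a loose single-machine analogue: SCRAMNet mirrors a memory region across machines in hardware, while `multiprocessing.shared_memory` shares a region between processes on one box. Everything named here is illustrative, not SCRAMNet's actual API.

```python
# Loose analogue of workers crunching slices of one shared data region.
from multiprocessing import Process, shared_memory

def worker(name, start, stop):
    """Square the doubles in [start, stop) directly in the shared region."""
    shm = shared_memory.SharedMemory(name=name)
    data = shm.buf.cast('d')                  # view the raw bytes as doubles
    for i in range(start, stop):
        data[i] = data[i] * data[i]
    data.release()                            # drop the view before closing
    shm.close()

def run(n=1000, workers=4):
    """Fill a shared array, split it across workers, collect the result."""
    shm = shared_memory.SharedMemory(create=True, size=n * 8)
    data = shm.buf.cast('d')
    for i in range(n):
        data[i] = float(i)
    step = n // workers                       # one contiguous slice each
    procs = [Process(target=worker, args=(shm.name, w * step, (w + 1) * step))
             for w in range(workers)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    out = [data[i] for i in range(n)]
    data.release()
    shm.close()
    shm.unlink()                              # free the region
    return out

if __name__ == "__main__":
    print(run()[:4])                          # first few squared values
```

With the real hardware the "workers" would be separate machines and the region would be reflected over fiber; the structure of the program is the same.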
Ben
08-08-2008 12:04 PM
I think I understand what you are saying, but the thing is that I've read all this hype about LabVIEW supporting multicore and multithreaded processing. LabVIEW is an inherently parallel language, ideal for parallel programming. It can already automatically break up computation between the cores of a quad-core processor (or an eight-core processor, if you're on that crazy eight-core Mac). It does so without the programmer having to specifically delegate which task goes where or worry about timing issues between processor cores!
That's really neat! From the literature I've read from NI, all you have to do is wire up your block diagram in a way that doesn't force serial execution (for instance, by running everything in series through a flat sequence structure), and LabVIEW dynamically handles the load balancing between cores. So this raises the question: if LabVIEW can do this automatically with one eight-core processor, can it do it with eight single-core processors? Can it use sixteen single-core processors? Four quad-core processors? A hundred quad-core processors!?!
It seems like it should let a user build a scalable supercomputer without the pain of writing some complicated control-hub program to handle task distribution and timing, because LabVIEW already does that for you.
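For what it's worth, the data-flow rule can be sketched in a textual language. This Python snippet is only an analogy with made-up names: two "nodes" with no wire between them are free to be scheduled concurrently, while a node wired to both must wait for their outputs. (CPython's GIL means threads won't actually speed up pure number crunching; this shows the dependency structure, not a benchmark.)

```python
from concurrent.futures import ThreadPoolExecutor

def node_a():                 # no input from node_b -> no "wire" between them
    return sum(range(10_000))

def node_b():
    return max(range(10_000))

def node_c(a, b):             # wired to both: must wait for their outputs
    return a + b

with ThreadPoolExecutor() as pool:
    fa = pool.submit(node_a)  # a and b are independent, so the scheduler
    fb = pool.submit(node_b)  # may run them at the same time
    result = node_c(fa.result(), fb.result())  # data dependency = ordering

print(result)
```

A flat sequence is the equivalent of chaining every node through `node_c`-style dependencies: legal, but it forbids the scheduler from overlapping anything.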
Does anyone know if the multithreading capability in LabVIEW can handle multiple processors (not just multiple cores), and if so, how many? How do I build a computer with thirty or forty processors running in tandem that would be well suited to parallel processing in LabVIEW?
What OS would I use? What kind of architecture should the computing platform have? Should I use special LabVIEW software or add-ons? What kind of data-transfer constraints are there? How much memory do I need, and how should it be shared among the processors? Can I use multiple multicore processors? Would it be better to use all single-core or all multicore processors?
I don't know what I'm doing here; all I know is that NI says LabVIEW 8.6 can run super efficiently on a multicore system, and sure makes it sound like LabVIEW can run super efficiently on a multiprocessor system too. Cool! Please give me a solid example of how to buy or build such a system!
Anybody know?
Please Help!
08-08-2008 12:58 PM - edited 08-08-2008 01:01 PM
Hi Root,
I have managed to load up eight CPUs with LV apps, but it takes a little extra effort to harness the power of all of them. In a nutshell, you want a lot of parallel code in your number-crunching routines. In my case I broke the arrays up into groups of eight and made all of my sub-VI calls reentrant.
LV will break up the code into threads and then use the OS to get the work done.
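A rough textual analogue of the "groups of eight plus reentrant sub-VIs" pattern, with made-up names: worker threads stand in for reentrant sub-VI clones, each getting its own chunk of the array. (Again, CPython's GIL caps real speedup for pure-Python kernels, unlike LV's threads; this sketch is about the structure.)

```python
from concurrent.futures import ThreadPoolExecutor

def crunch(chunk):
    """Per-chunk kernel -- stands in for one reentrant sub-VI clone."""
    return [x * 2 + 1 for x in chunk]

def crunch_in_eight(data, n=8):
    """Split the array into n groups and hand each to its own thread,
    the way parallel diagram branches get handed to OS threads."""
    size = (len(data) + n - 1) // n
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n) as pool:
        parts = pool.map(crunch, chunks)      # one chunk per worker
    return [y for part in parts for y in part]

print(crunch_in_eight(list(range(16))))
```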
Ben
08-08-2008 01:23 PM
Root Canal wrote:
Does anyone know if the multithreading capability in labview can handle multiple processors (not just multiple cores) and if so, how many? How do I build a computer with thirty or forty processors running in tandem that would be well suited to parallel processing in labview?