We currently have a core VI with sub-VIs that are called by reference and run in parallel with the core VI. The Windows Task Manager displays the whole application as a single process. Does anyone know of a way to break the core VI down into components that run as separate Windows processes, maybe using the VI Server or some other method?
One documented approach uses Functions -> Connectivity -> Libraries and Executables -> System Exec.vi (Windows Command Line) to launch an executable that runs in parallel with the executable that made the System Exec.vi call. I think the problem with this is that the spawned executables will not be able to communicate with the initial executable (via queues/notifiers/VIGs), since they are separate programs with their own memory spaces.
Thanks for any comments in advance.
Short answer: not in the development environment, but you can by building executables and using a general strategy such as TCP/IP in some form to communicate between them.
Long answer: the dev environment runs your code and hosts the runtime, so there is only one process. You can, however, build executables from your components, and each executable gets its own runtime. You can then implement communication between them using any number of the messaging strategies and frameworks that are around; most of them are fundamentally built on top of TCP/IP. You can't use LabVIEW "runtime" abstractions such as queues, notifiers etc. since, as you point out, they are only unique within an application instance and are designed for intra-process communication. Windows does provide mechanisms for processes to share memory or signal events, and in theory you could use those from inside LabVIEW. I would suggest, though, that the complexity and risk should make that your last option.
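To make the TCP/IP suggestion concrete, here is a minimal sketch (in Python, since LabVIEW code can't be shown as text) of two endpoints exchanging a JSON command over a local socket. In practice each side would be its own built executable (e.g. using LabVIEW's TCP palette); here the second endpoint runs in a background thread just so the example is self-contained. The port number and message fields are arbitrary choices for illustration.

```python
import json
import socket
import threading

PORT = 50007  # hypothetical port; pick any free port in a real deployment

def serve_one_message(ready: threading.Event) -> None:
    """Stand-in for the second executable: accept one connection, acknowledge one command."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", PORT))
        srv.listen(1)
        ready.set()  # signal that the listener is up
        conn, _ = srv.accept()
        with conn:
            msg = json.loads(conn.recv(4096).decode())
            conn.sendall(json.dumps({"ack": msg["cmd"]}).encode())

def send_command(cmd: str) -> dict:
    """Connect to the 'other process' and exchange one JSON message."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", PORT))
        cli.sendall(json.dumps({"cmd": cmd}).encode())
        return json.loads(cli.recv(4096).decode())

ready = threading.Event()
t = threading.Thread(target=serve_one_message, args=(ready,))
t.start()
ready.wait()
response = send_command("start_acquisition")
t.join()
```

The same request/acknowledge pattern works unchanged when the two endpoints are genuinely separate processes on the same machine or on different machines, which is what makes TCP/IP the usual choice here.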
I have a question though: what are you hoping to gain from running multiple Windows processes? Is this purely to enable some form of external debugging (e.g. viewing CPU usage in Task Manager), and have you looked at the Desktop Execution Trace Toolkit (DETT) as an alternative?
LabVIEW's inherently parallel nature makes "doing it all in a single LabVIEW routine" probably the optimal method in terms of (a) ease of development and documentation, (b) ease of maintenance, (c) ease of debugging, and (d) (probably) speed of execution. I've certainly had routines with >50 spawned asynchronous loops running at the same time acquiring video data (three loops control one camera -- we're not running 50 cameras).
Thanks for replying tyk007. Your answer is pretty much what I expected but it's nice that it doesn't seem like I missed something.
The gain from running multiple Windows processes is still an open question. It would allow portions of the code to run on a specified core, and with a chosen priority on that core. A process manager could oversee the processes, monitor their diagnostics, offload work from an overburdened process, or restart one if necessary. I don't really have any major arguments against it, though.
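The "process manager" idea above can be sketched in a few lines. This is a hypothetical illustration, not a real implementation: the worker command is a stand-in (a short Python one-liner), where a real system would launch LabVIEW-built .exe files, and on Windows could additionally set core affinity and priority (e.g. via `start /affinity /high` or the Win32 API). The sketch only shows the monitor-and-restart loop.

```python
import subprocess
import sys
import time

# Stand-in worker that "crashes" (exits) after 0.2 s.
WORKER_CMD = [sys.executable, "-c", "import time; time.sleep(0.2)"]

def supervise(cmd, restarts_allowed=2, poll_s=0.05):
    """Keep one worker process running, restarting it up to restarts_allowed times."""
    restarts = 0
    proc = subprocess.Popen(cmd)
    while True:
        time.sleep(poll_s)
        if proc.poll() is not None:        # worker has exited
            if restarts >= restarts_allowed:
                return restarts            # give up after the allowed restarts
            proc = subprocess.Popen(cmd)   # relaunch the worker
            restarts += 1

restarts = supervise(WORKER_CMD)
```

In a real deployment the supervisor would also want a health-check channel (e.g. the TCP link discussed earlier) rather than relying on process exit alone, since a hung process still shows as running.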
I found this link today on using Queues and Semaphores across executables: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019NE3SAM
It says the semaphores will be accessible via the VI Server. It doesn't say much about queues, but I got the impression that queues should work too. I'm planning to write a simple producer-consumer, break it apart into separate exes with the queue references as front panel inputs, and see what happens.
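As a point of comparison for that experiment, here is a sketch of the same producer-consumer split across two genuinely separate OS processes, with a TCP stream playing the role of the queue (again in Python as a stand-in for two LabVIEW builds). The consumer is spawned as a real child process, reports its listening port back to the producer, and sums the newline-delimited items it receives; all names and the summing payload are illustrative assumptions.

```python
import socket
import subprocess
import sys

# Source of the consumer process: listens on an OS-assigned port, reports the
# port to its parent on stdout, then sums newline-delimited integers until EOF.
CONSUMER_SRC = r"""
import socket
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
print(srv.getsockname()[1], flush=True)   # tell the producer which port we got
conn, _ = srv.accept()
buf = b""
while True:
    chunk = conn.recv(1024)
    if not chunk:
        break                              # producer closed the connection
    buf += chunk
print(sum(int(line) for line in buf.decode().split()), flush=True)
"""

# Spawn the consumer as a separate process (stand-in for the second .exe).
consumer = subprocess.Popen([sys.executable, "-c", CONSUMER_SRC],
                            stdout=subprocess.PIPE, text=True)
port = int(consumer.stdout.readline())

# Producer side: "enqueue" items by writing them to the TCP link.
with socket.socket() as producer:
    producer.connect(("127.0.0.1", port))
    for item in range(5):
        producer.sendall(f"{item}\n".encode())

result = int(consumer.stdout.readline())   # consumer reports the sum: 0+1+2+3+4
consumer.wait()
```

Note the newline framing: TCP is a byte stream, not a message queue, so without explicit delimiters (or length prefixes) separately sent items can arrive coalesced, which is one of the details a named queue hides from you.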
I use Queues all the time with Parallel VIs -- I don't think I've ever used a Semaphore (nor had the need for one). I have, on (rare) occasions, used a Notifier ...