LabVIEW Development Best Practices Discussions


A New Paradigm for Large Scale LabVIEW Development

I've never used SVs, so I don't know what caused the instability. However, my "homebrewed" remote-queue-based approach has never given me any kind of problem, provided you make sure to keep track of references and delete them when you are finished with them.

Message 21 of 30

We enable buffering and use our shared variables for asynchronous communication (which I agree is nearly always the way to go).

For example, let's say we are controlling the motion of an axis.  The controller receives and processes messages in a loop (say, 50 or 100 Hz) that is at least as fast as the external system sends messages.  (This is if the controller runs on RT; if it runs on Windows we use shared variable events.  We usually set the buffer size to 50, but we are OK as long as the buffer doesn't overfill.)  Say the controller receives a new setpoint.  Based on the current state of the model, the controller determines whether or not to start the move.  Let's say we start the move.  The move may take quite some time, but the controller keeps running; the model is just in the Moving state.  So if a new setpoint or a stop message arrives, the controller knows to apply the new setpoint or change to the stop state.  Communication is asynchronous, and stop commands take effect on the next loop (or within some small number of loops if the buffer has some items in it).  Note that the system design should be such that the buffers don't overfill; in practice we have found it is easy to process commands within a loop or two.  So for these types of applications, queuing (at our application level!) is not necessary.
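Since I can't paste a block diagram here, here's a rough Python sketch of the loop described above. The names (`AxisController`, `run_one_cycle`) are mine, not from any real code, and the bounded `queue.Queue` just stands in for the shared variable's buffer (size 50 in our case):

```python
import queue

# Illustrative sketch only: a message-driven axis controller whose loop
# drains a bounded buffer each cycle, mimicking a buffered shared variable.
class AxisController:
    def __init__(self, buffer_size=50):
        self.inbox = queue.Queue(maxsize=buffer_size)  # stand-in for the SV buffer
        self.state = "Idle"
        self.setpoint = None

    def send(self, msg, value=None):
        # Senders never block the controller; they just enqueue.
        self.inbox.put((msg, value))

    def run_one_cycle(self):
        # One iteration of the (e.g. 100 Hz) loop: drain pending messages,
        # then the model state drives what happens next.
        while True:
            try:
                msg, value = self.inbox.get_nowait()
            except queue.Empty:
                break
            if msg == "setpoint":
                self.setpoint = value
                self.state = "Moving"   # start (or retarget) the move
            elif msg == "stop":
                self.state = "Idle"     # takes effect this cycle

ctrl = AxisController()
ctrl.send("setpoint", 10.0)
ctrl.run_one_cycle()   # controller is now in the Moving state
```

A stop sent mid-move is simply picked up on the next cycle, which is the asynchronous behavior described above.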

So... shared variables can support buffering and asynchronous communication, and I highly recommend using them asynchronously.

Yes, I generally avoid global variables as well (at least within a component).  I think of shared variables as middleware, and I focus on them as a publish-subscribe communications solution.  I use them to send data between stand-alone components.

Shared variables still aren't perfect, but since they moved to TCP/IP a couple of years ago they have improved remarkably and are now mostly reliable, easy to use, and perform well.  They aren't the only solution, but they are a good option, I think.

Message 22 of 30

While it may be tempting to implement something similar with TCP, UDP, VI Server, etc. (because of previous bugs with shared variables), the nice thing is that NI maintains and improves them, and they offer a standard interface so future developers don't have to learn a custom implementation.  Independent buffers are maintained for each subscriber.  Network drops/reconnects are handled by the Shared Variable Engine (SVE).  For other use cases, events, alarms, RT FIFOs, etc. are available... I encourage you to check 'em out.


Certified LabVIEW Architect
TestScript: Free Python/LabVIEW Connector

One global to rule them all,
One double-click to find them,
One interface to bring them all
and in the panel bind them.
Message 23 of 30

I will look into it.

Remote queueing (see openg.org) uses VI Server, which is TCP/IP based.  It has been well behaved for me since I started using it.

Message 24 of 30

Do you have a link for the openg.org remote queueing information? I can only find message queues on openg.org.

Message 25 of 30

I am not offended at all. On second thought, you were right to name it like this: throwing the words "New Paradigm" into the header attracts more attention from people who still have no idea about the fundamental deficiencies of dataflow (at least in the form it is implemented in LabVIEW) and the benefits of the roughly 40-year-old "actor model", i.e. breaking an app into a bunch of modules, each of which has its own "life" (its own state machine, preferably hierarchical, and maybe even a separate thread or even a separate process that runs anywhere, perhaps on another machine) and communicates with other such modules only asynchronously, through some form of queues. Of course, the app must be broken into such modules in a way that minimizes the amount of communication between them (low coupling, high cohesion).

Looks like the discussion now is only about particular implementations of asynchronous communication. To see the larger picture, though, one has to realize the fundamental limitations of LabVIEW. What exactly has driven us to resort to these rather cumbersome design patterns? What is missing from the LabVIEW block diagram? If you think about it, the pure dataflow in LabVIEW, the connected graph itself (nodes and wires), without the two rather alien structures "thrown on top of it", namely Case and Loop, is not functionally complete! That's why ANY state machine pattern includes both of these structures.

The two main things missing are: 1. A "fork" node, i.e., you can't have a data token appear on only one of the outputs depending on the input value(s), only on all of them. 2. Implicit looping, i.e., you can't route data, say, from node (subVI) A to node B, then from B to C, and then from C back to A. Now imagine we had these two things. Then each node/VI on the block diagram would represent an active object (which, of course, could in turn have its own block diagram with other active objects on it!). The wires would then represent possible message/data links. The inputs had better be buffered (queues). Ever heard of Petri nets?
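To make the two missing constructs concrete, here is a toy Python sketch of mine (obviously not LabVIEW): a "fork" that emits its token on exactly one output, and a token circulating around the cycle A → B → C → A, with every input buffered by a queue:

```python
import queue

# Toy illustration of the two missing constructs: a one-output-only "fork"
# and a cyclic wiring A -> B -> C -> A with buffered (queued) node inputs.
inbox = {n: queue.Queue() for n in "ABC"}   # one buffered input per node

def fork(x):
    # Emit the token on exactly ONE output: keep looping while x < 10.
    return ("loop", x) if x < 10 else ("done", x)

def run(start=1):
    inbox["A"].put(start)
    while True:
        a = inbox["A"].get()                 # node A: route via the fork
        route, val = fork(a)
        if route == "done":
            return val
        inbox["B"].put(val)
        inbox["C"].put(inbox["B"].get() * 2)     # node B: double the token
        inbox["A"].put(inbox["C"].get() + 1)     # node C: increment, close the cycle
```

The point is that the token visits only one branch of the fork each pass, and the cycle terminates without any explicit Loop structure wrapping the graph.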

It would be even cooler if such graphical code could modify itself during run-time, i.e., dynamically create and kill nodes. Like God, you could build a "world", inhabit it with an initial population of "creatures", and start it, wondering where it will end up.

Look where MS is headed!

http://msdn.microsoft.com/en-us/library/bb964572.aspx

A LabVIEW killer (in several years?)

Well, until we have it as described above, I agree, we'd better use the active objects architecture anyway, even if implemented less gracefully. If anybody cares, LabHSM used regular queues within one app/exe and VI Server (TCP/IP) for communication between modules if they resided in different processes/machines, but the mechanism was hidden from the user of the library, of course, and the "Send Message" calls looked almost identical. This made it possible to use it even without LabVIEW event structures with custom events, which are a rather recent addition to LabVIEW, as everybody knows. I mean custom events, not the event structure itself, which appeared earlier but initially assumed that "events" could only be something the user does on the front panel, not something that can be thrown by the code.

Message 26 of 30

I also use NSVs as remote queues and they have worked very well for me.  NSVs can be of any composite data type and are very fast.  You can block on an NSV read until new data has arrived, so it is very efficient.

The sender's return IP address and a notifier refnum can be sent along with the message so that you can create synchronous messaging.  One can use VI Server for remote notifier invocation (via a proxy VI), but it is quite slow (~200 ms), so I prefer to use an NSV as the reply channel and use a daemon to send out the notifications.
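The request/reply scheme above can be sketched in Python with ordinary queues standing in for the NSVs (all names here are invented for illustration): each request carries its own reply channel, the "return address", and the daemon blocks on its queue just like a blocking NSV read:

```python
import queue
import threading

# Sketch of synchronous messaging over an asynchronous channel:
# the request tuple carries a reply queue (the "return address").
def daemon(requests):
    while True:
        msg, reply_q = requests.get()     # blocks until new data arrives
        if msg == "quit":
            break
        reply_q.put(msg.upper())          # do the work, notify the sender

requests = queue.Queue()
t = threading.Thread(target=daemon, args=(requests,), daemon=True)
t.start()

reply = queue.Queue()                     # per-call reply channel
requests.put(("ping", reply))
answer = reply.get(timeout=1)             # synchronous: wait for the reply
requests.put(("quit", None))
```

The sender only blocks on its own private reply channel, so the daemon itself stays purely asynchronous.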

Message 27 of 30

"The downsides [of events] are presumably plentiful, but all I can bring to mind right now is the disadvantage of holding a lot of data in an event call."

I'm going to stick my neck out and say that user events are not a good way to transmit data (or anything, for that matter) to the daemons.  I've been discussing the problems of injecting user events into daemons with Damien here, so I'll just briefly list a few of my objections:

  1. It complicates the daemon code.  You now have to write a queued message handler and an event handler, and worry about the interactions between them.
  2. It complicates the api the daemon exposes to clients.  Some operations are performed by sending a message on the queue, others performed by firing a user event.
  3. It complicates the client code.  The client has to create and manage a queue AND a user event to send information to a single daemon.
  4. It limits the daemon's flexibility.  The daemon can only be connected to a single client at a time.  Sometimes this might be desirable, but there are lots of situations where a daemon could easily accommodate multiple clients.

Having the client use queues to communicate with the server also creates a lot of complexity that isn't needed.  More importantly, it isn't a very robust solution.  Imagine multiple clients connected to a daemon.  They all send messages to the daemon through the same queue.  The problem is that since each client, which may have no knowledge of other clients, has access to the queue, it can manipulate the queue in unpredictable ways, such as flushing it or dequeuing elements.  Obviously that would really mess up the other clients and possibly break the daemon as well.  By removing the client's ability to manipulate the queue, you've just eliminated an entire family of potential bugs.  (IMO the best bugs are those that can't occur.)

What's this silver bullet that simplifies daemon code, makes it easier for clients to use the daemon, allows for more flexibility, and prevents whole categories of bugs from popping up?  Encapsulation.  Rather than just dynamically launching a VI as Doug suggested, all the daemons should be converted to classes and made into active objects.  All active objects expose a Create method that creates the runtime references needed and launches the remote processing loop, and a Destroy method that cleans everything up.  Queues are still used to send information from the client to the daemon, but messages are exposed to the client via public methods.  For example, the daemon class has a public Abort method.  Inside this method the message queue is unbundled from the class wire, the queue is flushed, and an abort message is sent to the remote process.  The queue itself is never exposed to the client.  All commands sent to the daemon should be through simple VIs with inputs for the message data.

From the client side, using a simple daemon looks just like any other normal LabVIEW code.  Drop a Create VI, the command VIs you want the daemon to execute, and a Destroy VI.  Wire up the class terminals and run it.  The daemon spawns in a new thread, and the client doesn't have to worry about any of the execution or communication framework needed to enable it.
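The Create/command/Destroy pattern translates directly to text-based languages, so here's a hedged Python sketch of the active-object encapsulation described (class and method names are illustrative, not anyone's real framework). The message queue is a private member, so clients can only act through the public methods:

```python
import queue
import threading

# Illustrative active object: private queue, public methods, Create/Destroy.
class Daemon:
    def create(self):
        self._q = queue.Queue()              # private: never handed to clients
        self._log = []                       # stand-in for real work done
        self._t = threading.Thread(target=self._loop, daemon=True)
        self._t.start()

    def _loop(self):
        while True:
            msg, data = self._q.get()
            if msg == "destroy":
                break
            self._log.append((msg, data))

    # Public API: simple methods that wrap message sends.
    def move(self, setpoint):
        self._q.put(("move", setpoint))

    def abort(self):
        # Flush pending commands, then send the abort, as described above.
        # (A production version would guard against concurrent senders.)
        while not self._q.empty():
            self._q.get_nowait()
        self._q.put(("abort", None))

    def destroy(self):
        self._q.put(("destroy", None))
        self._t.join()

d = Daemon()
d.create()        # spawns the remote processing loop
d.move(5.0)       # client never touches the queue directly
d.destroy()       # cleans up and joins the worker
```

Because only `Daemon` itself can flush or dequeue, the family of "client corrupts the shared queue" bugs described above simply cannot occur.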

I do use events in my active objects; however, they are used to notify clients of things that are happening in the AO, not to tell the AO what to do.  The AO exposes methods that return the user events to the client, so it can register for the events it is interested in.  Events should be used to send information out of code modules, not into them.
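A minimal sketch of that "events flow out, not in" rule, with invented names (callbacks stand in for LabVIEW user event registration):

```python
# Illustrative only: the AO fires events outward; clients subscribe to
# the ones they care about and never inject events into the AO.
class ActiveObject:
    def __init__(self):
        self._subs = {}

    def register(self, event, callback):
        # Client asks for the events it is interested in.
        self._subs.setdefault(event, []).append(callback)

    def _fire(self, event, data):
        # Called internally to notify clients of what is happening in the AO.
        for cb in self._subs.get(event, []):
            cb(data)

ao = ActiveObject()
seen = []
ao.register("moved", seen.append)
ao._fire("moved", 42.0)   # internal notification reaches the subscriber
```

Commands still travel inward through the public methods; events only ever travel outward.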

Message 28 of 30

I found that the following is most convenient (this is how I did it in EDQSM and LabHSM):

You are right: other modules should not have access to the daemon's event queue. So, in my frameworks each active object (daemon) has a separate message queue. It exists in addition to the internal event queue and actions queue. Only the messages (events defined as "public") received into this message queue will eventually make it into the internal event queue of the AO. So, instead of using public methods for communication, I use public events.
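In Python terms, the two-queue arrangement might look roughly like this (my naming, not EDQSM/LabHSM code): external messages land in the public message queue, and only events declared "public" are forwarded into the AO's internal event queue:

```python
from collections import deque

# Sketch of the filter between the public message queue and the
# AO's internal event queue: only declared-public events get through.
PUBLIC_EVENTS = {"start", "stop"}

message_queue = deque(["start", "poke_internals", "stop"])  # from other modules
event_queue = deque()                                       # internal only

while message_queue:
    msg = message_queue.popleft()
    if msg in PUBLIC_EVENTS:
        event_queue.append(msg)     # admitted into the AO's internal queue
```

The internal event and action queues stay completely invisible to other modules; the public-event set is the whole contract.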

Message 29 of 30

I haven't been on their site to look for it in a while.  I do know they now have a new tool for automatically downloading libraries that you may have to use.  I originally had to modify the Remote Queue library they provided to reflect the latest connector panes in LabVIEW, because they originally wrote it for LabVIEW 5.x, I was working in 6.x at the time, and changes had been made to the connector panes.

Message 30 of 30