Techniques for passing shared data between client QDMHs

Hello all,

I am currently refactoring a large application (i.e. cleaning up my own mess) and have a question about architectures for sharing data between parallel processes. In our application, I have identified 7 parallel processes that are required: UI/server (acting as the master), camera, DAQ, motor, processing, display, and saving (all acting as slaves). I have modelled the application after the “Continuous Measurement and Logging” template and am using queue-driven message handlers (QDMHs) for each of the parallel processes. When the UI launches, it creates the QDMHs and passes them to each of these processes, which run as subVIs within the main application.
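Since I can't paste a block diagram here, a rough Python-style analogy of that launch sequence is below (purely illustrative, all names hypothetical): the master builds one message queue per slave and hands it to the worker when it starts, just as the UI loop passes the queue references into each subVI.

import queue
import threading

def worker(name, inbox):
    # Generic queued message handler: dequeue (message, data) pairs until told to exit.
    while True:
        msg, data = inbox.get()
        if msg == "Exit":
            break
        print(f"{name} handling {msg}: {data}")

# Master (UI/server) creates the queues and launches the slaves.
slave_names = ["camera", "DAQ", "motor", "processing", "display", "saving"]
inboxes = {name: queue.Queue() for name in slave_names}
threads = [threading.Thread(target=worker, args=(n, q)) for n, q in inboxes.items()]
for t in threads:
    t.start()

# The master can now message any slave by name, and shuts them all down at exit.
inboxes["camera"].put(("Configure", {"pixels": 2048}))
for q in inboxes.values():
    q.put(("Exit", None))
for t in threads:
    t.join()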

The issue I am having is deciding which design to use to share a large number of variables (>50, no arrays) that are used across these parallel processes. By share, I mean one process is the “owner” of a variable but the other processes derive data from it. For instance, the number of camera pixels used affects the camera loop and is owned by that process, but it also affects the display loop (how big the axis is) and the processing loop (defining array sizes). Our application has a number of different “states” in which some of these variables change (e.g. going from an acquisition mode to post-processing, and then in post-processing loading a previous acquisition with completely different parameters), but not necessarily all of them.

At this point, I can see a few different architectures, each with their own positives and negatives:

  1. The server/UI loop has a “master” cluster of data. When a parameter is changed in one of the other processes, that process broadcasts the update to the server/UI, which updates its master copy and then broadcasts that cluster back to the slaves.
    • Benefits: This should avoid race conditions. Because only one slave process “owns” a given variable, it should be the only one to set it (although I haven’t thought of how to enforce that). The approach is simple because we just dump data back and forth via variants between the master and the slave loops.
    • Negatives: We are continually moving data that doesn’t need to be moved. This seems inefficient, but without testing I have no sense of whether it will actually affect performance. This data is never used in the UI/server loop, so giving that loop access to it seems unnecessary. Additionally, sharing the entire cluster with processes that don’t require it seems like a bad idea from an encapsulation perspective.
  2. The owner of data that is shared must push the data to the other parallel processes when something changes.
    • Benefits: Only the processes that require a particular variable have access to it.
    • Negatives: This method seems to lose the benefits of code reuse. For instance, if the DAQ process updates 5 variables, two of which are required by the camera and four by the motor, it looks like I am writing sections of code to handle the send/receive for each permutation of a change. Compared with the “broadcasting” method above, this seems like a lot of bookkeeping. (A rough sketch contrasting this targeted push with the broadcast in option 1 follows this list.)
  3. Creating an action engine/functional global variable (FGV) containing a cluster of data. When transitioning to different states, I can change some of these variables with a set, and when a get is called later by the other processes (this will require some synchronization to avoid race conditions) the variables will be up to date.
    • Benefits: This seems to reduce a lot of the communication overhead between the loops and avoids creating copies of what could be a “large” cluster (there will be no arrays in the cluster, just parameters).
    • Negatives: I am somewhat worried about the blocking calls, but I have no data to make me think this would cause performance issues. Additionally, I can see there being race conditions. For instance, if the camera and DAQ loops both pull out the cluster of parameters as we change states, update only their own parameters, and then set the cluster back into the FGV, the last one done (say DAQ) will have updated the FGV with the new DAQ values but overwritten the camera’s new values with the old ones it read earlier. (A sketch of this read-modify-write race also follows the list.)

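As a concrete (if crude) Python-style sketch of options 1 and 2, again purely illustrative, reusing the hypothetical per-slave queues from the earlier sketch and made-up field names and routing:

# Option 1: the owner reports its change to the master, which merges it into the
# master cluster and re-broadcasts a full copy to every slave.
master_cluster = {}

def master_handle_update(update, inboxes):
    master_cluster.update(update)                      # merge the changed fields
    for q in inboxes.values():                         # broadcast the whole cluster
        q.put(("ClusterUpdate", dict(master_cluster)))

# Option 2: the owner pushes only the changed fields, and only to the loops that
# consume them, via a routing table. This is the per-permutation bookkeeping
# mentioned in the negatives above.
consumers = {
    "camera pixels": ["display", "processing"],
    "DAQ rate":      ["camera", "motor"],
}

def owner_push_update(update, inboxes):
    for field, value in update.items():
        for target in consumers.get(field, []):
            inboxes[target].put(("FieldUpdate", {field: value}))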

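And here is the option 3 race I am worried about, again as an illustrative Python-style sketch: two loops each do a whole-cluster get, modify only their own fields, then set the whole cluster back, and the last writer wins.

fgv_cluster = {"camera pixels": 1024, "DAQ rate": 1000}

# Camera and DAQ loops both read the same snapshot during a state change...
camera_copy = dict(fgv_cluster)
daq_copy = dict(fgv_cluster)

camera_copy["camera pixels"] = 2048   # camera updates only its own field
daq_copy["DAQ rate"] = 5000           # DAQ updates only its own field

fgv_cluster = camera_copy             # camera writes the whole cluster back first
fgv_cluster = daq_copy                # DAQ writes back last...
# ...and "camera pixels" is back at 1024: the camera's update has been lost.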
Are there any other ways of communicating the data that I am missing? Are there positives/negatives for the three architectures I have listed that I have overlooked? Does anyone have any preferences from their own experience?

Thanks for the help!

Message 1 of 2

Since you are dealing with a large cluster, I would likely go with the FGV. Inside the FGV you update only the parameter(s) you need to. This avoids some of the race conditions, since the update is protected as a critical section inside the FGV. You will need an input for each data type the cluster contains, plus an enum to tell the FGV which parameter to update (or to just read the cluster value). As long as you keep the FGV very simple, you won't have to worry about the blocking; it will execute quickly enough.
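If it helps, here is a rough Python-style analogy of what I mean (not LabVIEW, and the action names and fields are just placeholders): the get/modify/set happens inside the FGV behind a lock, and the caller only names the single parameter it wants to change, so nobody ever does a whole-cluster read-modify-write from outside and the lost-update race described above goes away.

import threading

class SettingsFGV:
    def __init__(self):
        self._lock = threading.Lock()                 # protects the critical section
        self._cluster = {"camera pixels": 1024, "DAQ rate": 1000}

    def call(self, action, value=None):
        # 'action' plays the role of the enum input: either "Read Cluster"
        # or the name of the single parameter to update.
        with self._lock:
            if action == "Read Cluster":
                return dict(self._cluster)            # return a copy
            self._cluster[action] = value

settings = SettingsFGV()
settings.call("camera pixels", 2048)   # camera loop updates only its field
settings.call("DAQ rate", 5000)        # DAQ loop updates only its field
print(settings.call("Read Cluster"))   # both updates survive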


Message 2 of 2