LabVIEW


Are clusters or individual elements better for shared variables?

Solved!

So...  I have some RT code that is being updated, and pulled out of the Stone Ages of LabVIEW.  It was originally written for an old FieldPoint controller operating in "headless" mode, and used the "publish" and datasocket methods for communications and external control.  I had to get clever way back then, and put together a parsing/unparsing system for strings to send sets of data back and forth between the controller and any HMI or other computer attached.
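That string-based scheme can be sketched in ordinary code (Python here, since the original is a LabVIEW block diagram); the field names and binary layout below are purely illustrative assumptions, not the original FieldPoint format:

```python
import struct

# Illustrative sketch of a flatten/parse scheme like the one described
# above. The layout (timestamp, reading, alarm flag) is an assumption
# for demonstration, not the original FieldPoint string format.
SENSOR_FMT = "<Id?"  # uint32 timestamp, float64 reading, bool alarm

def pack_sensor(timestamp, reading, alarm):
    """Flatten one sensor's data into a byte string for transmission."""
    return struct.pack(SENSOR_FMT, timestamp, reading, alarm)

def parse_sensor(payload):
    """Recover (timestamp, reading, alarm) on the receiving side."""
    return struct.unpack(SENSOR_FMT, payload)
```

A round trip through `pack_sensor`/`parse_sensor` recovers the original values, which is essentially what the old parsing/unparsing routines had to guarantee by hand.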

 

Now, I'm completely rewriting the code for a cRIO system, and doing my best to leverage all of the strengths of the latest LabVIEW versions.  I have already done an intermediate stage, where I converted from the publish/datasocket method to using network shared variables for my strings, so I could keep some of the original control and calculation logic.  Now, however, I'm going back to the drawing board for most of the program, with only some of the proven logic being held over into the new version.  And, as I'm putting together the data structures I need for both internal control and external communication, I'm in a bit of a quandary...

 

I have come upon a data structure dilemma: should I use individual shared variables for my data, or assemble associated data into clusters?  My original program had a string (essentially a flattened cluster) for each sensor in use (up to 4), one for the system parameters and states, and one for the control parameters and states.  There was a certain advantage to keeping the data compartmentalized like that: it kept things organized, kept me from scattering random references to each data point, and limited the number of communication channels to just a handful.  Mimicking this structure with cluster shared variables would be easy.  But I'm not sure it's the best or most network-efficient approach.
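To make the two layouts concrete, here is a rough model in Python (the sensor fields are made up for illustration): the cluster approach publishes one record per sensor, while the flat approach publishes one item per data point:

```python
from dataclasses import dataclass

@dataclass
class SensorCluster:
    """Stand-in for a LabVIEW cluster; the fields are hypothetical."""
    temperature: float
    pressure: float
    fault: bool

# Cluster layout: one shared item per sensor (4 items for 4 sensors)
clustered = {f"sensor{i}": SensorCluster(0.0, 0.0, False) for i in range(4)}

# Flat layout: one shared item per data point (12 items for the same data)
flat = [f"sensor{i}.{field}"
        for i in range(4)
        for field in ("temperature", "pressure", "fault")]
```

Same information either way; the difference is how many independently published items the network layer has to manage.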

 

I know the bundling/unbundling will add some processor time in my code; that is not new to me (it will still be much faster than my old parsing routines).  But if I have individual data points being thrown around, I can access them easily from things like Data Dashboard (which is great, but far too limited to grab items inside clusters and such).  Having all of my data points individually available would make my project messier, but open up easier access.  It would also dramatically increase the number of data points being thrown around on the network at any one moment.  For reference, I would have a maximum of about 100 data points at one time, made up of a combination of integers, floats, booleans, integer arrays, and boolean arrays.  Alternatively, I would have a maximum of 8 clusters containing those same data points.

 

Any suggestions on which way I should lean?  Are there advantages or disadvantages to shared clusters like the ones I need, versus the larger number of individual shared variables the alternative would require?  Network traffic and efficiency are always a concern, particularly since this is a "headless" cRIO in a control situation that must maintain a fast scan rate...

 

Thanks for any help.  I'm so stuck on this fence, and I can't figure out which side to fall off!

Message 1 of 5
Solution
Accepted by topic author Vrmithrax

Given the amount of data that you could potentially be working with, it would probably be best to bundle your data into clusters. As stated in the last paragraph of this document, it’s not recommended to use more than 25 network shared variables on cRIO. Having 100+ shared variable connections on the target would very likely cause performance issues, potentially even to the point of crashing the system.
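Using the numbers from the question, the arithmetic behind that recommendation looks like this (assuming the 25-variable guideline from the linked document):

```python
# Sanity check of the guideline cited above, using the question's numbers.
MAX_RECOMMENDED_VARS = 25   # guideline from the referenced NI document

individual_points = 100     # worst case: one shared variable per point
clusters = 8                # same data bundled into cluster variables

assert individual_points > MAX_RECOMMENDED_VARS   # flat layout exceeds it
assert clusters <= MAX_RECOMMENDED_VARS           # clustered layout fits
```

The flat layout overshoots the guideline by 4x, while the clustered layout leaves headroom for a few additional variables later.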

Message 2 of 5

Thanks Tim, that is a great source that I somehow missed in my hunt for information regarding my dilemma...

 

I have to wonder, though: does that limit of 25 also include the I/O points on the cRIO?  Does anyone know?  Most of the I/O points are network-shared by default during initial configuration, and you could very quickly exceed 25 variables on an 8-slot rack (such as the one I use, a 9074).  Now I'm a bit worried that I'm overusing the variable engine even before the communications clusters get figured in...

Message 3 of 5

I/O variables are not actually handled by the Shared Variable Engine, but rather are published on a thread associated with the NI Scan Engine and therefore don't count toward the total number of shared variables.

Message 4 of 5

Thanks again, Tim.  Definitely set me down the right path for this code rewrite!

Message 5 of 5