01-02-2020 10:31 AM - edited 01-02-2020 10:34 AM
I am using a shared variable, hosted on a cRIO-9068, to receive local and remote commands from UI applications on touchpanels and laptops, respectively, to control various operational aspects of a real-time application running on the controller. Traditionally, I receive a command, process it, then reset the shared variable to its "Do Nothing" state so it can resume waiting for additional commands. This process thread runs in a 250 ms loop on the controller, which has been practical because the application has not had to worry about concurrent or multiple bursts of commands arriving at once.
Recently, the requirements changed such that multiple hosts could likely issue commands to the same controller concurrently, so all I've done up to this point is reduce the loop's Wait from 250 ms down to 10 ms. The issue I'm now having is that a single command -- one command sent by one host -- is being processed anywhere from 2 to 3 times. The only explanation I can see is that there is a delay before the shared variable actually takes the "Do Nothing" write, during which the loop manages to iterate 2 or 3 times. To my mind, this seems to warrant using the shared variable's "buffering" feature, also because I could now theoretically receive multiple commands at once, and clearing the shared variable to a "Do Nothing" state could overwrite additional commands issued by another host.
My question is: is buffering the most practical approach in this case, or are there other best-practice techniques that would be advisable aside from buffering? Attached is an image of the code snippet at the core of my application. I did not include the VI/project files due to the large number of dependencies.
Development machine: Win7 SP1 using LabVIEW 2015 SP1
Thanks
Solved!
01-02-2020 10:39 AM
I have a couple thoughts.
One is that the network shared variable might not be the best method for you to use. I've used them for years in a couple of applications and have had no problems (but single-value, not buffered). Others on the forum will swear you should never use them. Other methods such as TCP/IP or network streams are often recommended instead.
The other is that, whatever method you use, you should add another item to the cluster, such as a serial number. Every time you send a new command, increment that number and include it in the message. If you happen to read a message with the same number as one you've read before, you know it is the same message and can just ignore it.
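Since a LabVIEW diagram can't be pasted as text, here is a rough Python sketch of the serial-number idea (all names here are made up for illustration): the sender tags each command with an incrementing number, and the receiver skips any message whose number matches the last one it processed.

```python
def make_command(seq, name, payload=None):
    """Build a command record; in LabVIEW this would be a cluster."""
    return {"seq": seq, "name": name, "payload": payload}

class CommandReceiver:
    """Receiver that ignores re-reads of a message it has already handled."""

    def __init__(self):
        self.last_seq = -1  # nothing processed yet

    def handle(self, cmd):
        """Return True if the command was processed, False if it was a repeat."""
        if cmd["seq"] == self.last_seq:
            return False  # same serial number as last time -- ignore it
        self.last_seq = cmd["seq"]
        # ... dispatch cmd["name"] to the real-time logic here ...
        return True

rx = CommandReceiver()
assert rx.handle(make_command(1, "START")) is True
assert rx.handle(make_command(1, "START")) is False  # duplicate read ignored
assert rx.handle(make_command(2, "STOP")) is True
```

This works regardless of transport, so it can paper over the re-read problem even if you stay on shared variables.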
01-02-2020 03:22 PM
I am one of those who avoid the SVs.
You may get away with using them for simple stuff, but when a requirement changes -- bingo -- you find out you cannot trust them to actually update, they lose data, etc.
Go with a TCP/IP connection, and when a command is received (any one message will only be received once), stick the command in a proper queue. It scales well and can be trusted.
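As a rough illustration of the queue part (a Python sketch, not LabVIEW -- the names are invented): once commands land in a queue, the control loop dequeues each one exactly once, so there is no "Do Nothing" reset to race against.

```python
import queue

cmd_queue = queue.Queue()

# Producer side (would run in the TCP receive loop):
for cmd in ["START", "SET_RATE 10", "STOP"]:
    cmd_queue.put(cmd)

# Consumer side (the fast control loop): each get() removes the command,
# so nothing is ever processed twice and nothing gets overwritten.
processed = []
while not cmd_queue.empty():
    processed.append(cmd_queue.get())

print(processed)  # every command handled once, in order
```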
Just my 2 cents,
Ben
01-02-2020 05:23 PM - edited 01-02-2020 05:27 PM
I like the idea of using tokens as you mention, and have utilized them before. The problem in this particular instance is that I need the cRIO to accept commands from a "captain" host, if you will, who has assumed control in certain situations. When the cRIO comes back online after a power cycle or other event, there are situations where the hosts will autonomously try to instruct it what to do next. So that it doesn't repeat instructions, I assign a master. That might be problematic with a token system in this instance. I'm of course not too concerned with the cRIO accepting or rejecting the right host, but rather that if my data is lossy or gets cleared, as in the code above, it might discard the master's command (which may arrive after the non-master's command) due to the nondeterministic nature of an unbuffered object.
Does the buffer sound like an appropriate approach to this type of scenario? It seems like perhaps that's what they had in mind when creating them. I agree on the TCP/IP advice; I just unfortunately have a lot of code already built around the SV object.
01-02-2020 06:43 PM - edited 01-02-2020 06:43 PM
@dest2ko wrote:
... I agree on the TCP/IP advice; I just unfortunately have a lot of code already built around the SV object.
Well, it may not help you now, but others reading this thread may see why it's worth starting out with an approach that can be adapted without having to reinvent the wheel.
Take care,
Ben
01-02-2020 06:53 PM
I am also of the opinion that Network Published Shared Variables are evil. They have given me so many headaches including extremely weird race conditions.
I'm with Ben: use a TCP connection for your remote commands and pass the updates via a queue to your main loop. You can even dynamically launch the TCP loop to have multiple instances running in parallel, so you can handle commands from all hosts.
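A text-language sketch of that architecture (Python here, purely illustrative -- the port number, newline-delimited framing, and all names are assumptions, not anything from the thread): one handler is launched per connected host, and all handlers feed a single queue that the main loop drains.

```python
import queue
import socket
import threading

cmd_queue = queue.Queue()  # shared by all handlers; drained by the main loop

def handle_client(conn, addr):
    """One of these runs per connected host; TCP delivers each message once."""
    with conn:
        buf = b""
        while True:
            data = conn.recv(1024)
            if not data:  # host disconnected
                break
            buf += data
            while b"\n" in buf:  # newline-delimited commands (an assumption)
                line, buf = buf.split(b"\n", 1)
                cmd_queue.put((addr, line.decode()))

def server(host="0.0.0.0", port=5000):
    """Accept loop: dynamically launch one handler thread per host."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            threading.Thread(target=handle_client, args=(conn, addr),
                             daemon=True).start()
```

The LabVIEW equivalent would be asynchronously launching a reusable connection-handler VI from the listener loop, with a named queue carrying the commands to the main state machine.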
01-24-2020 01:05 PM - edited 01-24-2020 01:06 PM
Sorry for such a long delay in following up. I ended up using the buffered network shared variable, sized according to how many concurrent writes might be generated to this command variable. It is working well for the time being (the race conditions were resolved by buffering plus slowing down the dequeue loop), but I would encourage anyone reading this to consider what the others have suggested in this thread: whether shared variables are the right choice and whether they will scale to the potential requirements of your architecture down the road. Pretty wise advice. Thanks for the help.
01-24-2020 01:12 PM
Even NI recommends using network streams or low-level TCP for your application:
http://zone.ni.com/reference/en-XX/help/371361R-01/lvconcepts/choosing_lv_comm/
Shared variables really should only be used for transferring latest values for slow, user-side monitoring, not for critical "everything must go through once and only once" messaging.
01-24-2020 01:36 PM
We actually use a system of network queues with publish/subscribe. It's not code I can distribute, but it is a very flexible and scalable system. In addition, it allows broadcasting messages to multiple listeners. The system can buffer messages if a listener is not currently connected, or throw them in the bit bucket.
I will also add that I am in the camp of not using network shared variables.