
Slow data transfer using shared variables


We've built a reasonably large LV 8 project and are taking advantage of the shared variables to communicate between our host PC and 3 or 4 RT systems. We've noticed a large lag in the data from PC to RT and back to PC.

The CPU usages on the computers aren't at very high levels (they range from 30%-60%), and memory usage stays pretty low and stable.

This is how we notice the delay:

We have two arrays (read and write) which are shared variables hosted on one of the RT systems. The host PC displays indicators for those two arrays. The VI on the RT system takes the write array, does some processing, and then stuffs resulting data into the read array.

So on our host PC, we write something to the write shared variable - that takes a couple seconds to show up on the indicator. Another few seconds later, we see the results show up in the read array.

All arrays are initialized in the VIs, and the shared variables are FIFO. There are no front panel things on the RT system.

We have tried to eliminate all processing between the write and the read array (so it simply passes data through), and we still see delays. The few second delay happens on a good day. When things get bad, it might take us 10 seconds or more for the data that we wrote to the shared variable to show up.

Our RT loops need to be running at 1 kHz, and we wish to log that data. The application is time-critical, but the shared variables seem to transfer slowly. We've tried different network settings, and have seen some improvement, but not enough yet.

Has anyone experienced similar problems? I read in a post about shared variables that they are not recommended for high-speed applications, which is unfortunate. I'm still hanging onto them for now though, in hopes of finding a solution.

Message 1 of 7
How big are the arrays, and how many elements of the array are actually changing at a time?

It might actually be quicker to pair each changed element with its index and send only those pairs, reconstructing the array on the other side. If the data transfer itself is the bottleneck, this may help.
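To illustrate the idea in a text language (Python here, since the LabVIEW diagram can't be shown inline): send only (index, value) pairs for elements that changed, and rebuild the full array on the receiving side. The function names are my own invention, just a sketch of the technique.

```python
# Sketch of the "send only changed elements" idea: instead of shipping a
# whole 4000-element array every update, send (index, value) pairs for the
# elements that differ, and reconstruct the array remotely.

def diff_array(old, new):
    """Return (index, value) pairs for elements that differ."""
    return [(i, v) for i, (o, v) in enumerate(zip(old, new)) if o != v]

def apply_diff(array, changes):
    """Reconstruct the updated array on the receiving side."""
    out = list(array)
    for i, v in changes:
        out[i] = v
    return out

old = [0.0] * 8
new = list(old)
new[3] = 1.5
new[6] = -2.0

changes = diff_array(old, new)      # two pairs instead of eight values
rebuilt = apply_diff(old, changes)
assert rebuilt == new
```

Of course, if most of the 4000 elements change every iteration (as the original poster later says they do), the diff carries little benefit; it pays off only for sparse updates.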

Also, how are the RT units connected? 100 Mb or 1 Gb Ethernet?

Further, post a picture of your code so we can see approximately what you're doing; maybe someone will have a good idea of what is going wrong.

Trying to help

Using LV 6.1 and 8.2.1 on W2k (SP4) and WXP (SP2)
Message 2 of 7

I haven't used shared variables yet, but before this functionality existed (and still now ;) ), I simply set up communication between both systems using TCP or UDP sockets. This is not so complicated in LabVIEW, and it would also allow clients written in other languages and/or running on other OSes.

In your case I would suggest buffered TCP transmission from RT to PC (allowing logging of all data without loss) and single-point transfers from PC to RT for variable updates.
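A minimal text-language sketch of the RT-to-PC half of that split (Python, not LabVIEW, and using a localhost socket pair to stand in for a real TCP link): the sending side streams each sample as a length-prefixed block of doubles, and the receiving side logs every block in order, so nothing is lost.

```python
# Buffered TCP streaming sketch: length-prefixed blocks of doubles,
# sender ("RT side") to receiver ("PC side"). A socketpair stands in
# for a real cross-network TCP connection.
import socket
import struct

rt_side, pc_side = socket.socketpair()

# RT side: pack each sample array and prefix it with its byte length
samples = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]
for s in samples:
    payload = struct.pack(f"<{len(s)}d", *s)
    rt_side.sendall(struct.pack("<I", len(payload)) + payload)
rt_side.close()

# PC side: read length-prefixed blocks until the stream closes, log all
logged = []
stream = pc_side.makefile("rb")
while True:
    header = stream.read(4)
    if not header:
        break
    (n,) = struct.unpack("<I", header)
    data = stream.read(n)
    logged.append(list(struct.unpack(f"<{n // 8}d", data)))
pc_side.close()
```

Because TCP buffers and retransmits, the logger sees every sample even when the network hiccups; the trade-off is latency, which is why the PC-to-RT direction (commands, setpoints) is better served by single-point updates.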

Message 3 of 7


It is strongly recommended to not use networked shared variables (or anything else networking related) in the time-critical loop. When you have any shared resource in the time-critical loop (TCL) and the TCL is accessing that resource, nothing else can get to it, including the shared variable engine that rebroadcasts the data over the network. What is probably happening is that your TCL is "blocking" access to the shared variable while it is reading and writing to it, preventing other threads from getting to it in a timely manner. Also, adding a networked shared variable to a TCL will effectively destroy its determinism, because TCP/IP communication is inherently non-deterministic.

To fix this, if you are using the shared variable to store values between loop iterations, I would instead use an RT FIFO shared variable to bring data in and out of the TCL, and then, in a separate non-deterministic loop, write the data to a networked shared variable.
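The pattern above can be sketched in Python (not LabVIEW; a `queue.Queue` stands in for the RT FIFO shared variable, and a plain list stands in for the networked shared variable): the time-critical loop only pushes into the FIFO, which is cheap and never blocks on the network, while a second loop drains the FIFO and does the slow network publishing.

```python
# Decoupling sketch: time-critical producer + non-deterministic network
# consumer, connected by a FIFO, so the fast loop never touches the network.
import queue
import threading

fifo = queue.Queue()   # stands in for the RT FIFO shared variable
published = []         # stands in for the networked shared variable

def time_critical_loop():
    for i in range(1000):   # the 1 kHz acquisition loop in the real system
        fifo.put(i)         # cheap enqueue; no network access here
    fifo.put(None)          # sentinel: acquisition finished

def network_loop():
    while True:             # slower loop; free to block on TCP, disk, etc.
        item = fifo.get()
        if item is None:
            break
        published.append(item)

t1 = threading.Thread(target=time_critical_loop)
t2 = threading.Thread(target=network_loop)
t1.start(); t2.start()
t1.join(); t2.join()
assert published == list(range(1000))   # nothing lost, order preserved
```

The key property is that jitter in the network loop never stalls the time-critical loop; the FIFO absorbs bursts, at the cost of added latency when the consumer falls behind.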

Hope this helps,

--Paul Mandeltort
Automotive and Industrial Communications Product Marketing
Message 4 of 7
Hi, just to answer some of the questions...

Some arrays contain 4000 elements, and many values change continuously because they're mostly measurements from our devices. The RT units are on their own 100 Mb Ethernet network connected by a switch.

Yes, the networked shared variables are non-deterministic, but they also just seem slow.

All that aside, we did some testing on the shared variable communication speed without all the fluff in our project. I attached a very simple project to demonstrate.

The screen shows the time it takes for the write and read values to become equal.

These are the variables that affect it:

  • Size of the write array (the first iteration doesn't count, so the array has a chance to be allocated in memory)

  • Wait time on the read and write loops (watch the CPU usage suddenly jump as you make these faster)

  • Single-process vs. networked shared variable (single-process made things fast)

  • FIFO shared variable (4000 elements) (made an improvement but still "slow")

And here are some observations:

  • The local variables aren't adding significant overhead. We did the same test with single-process shared variables instead since they seemed fast, but the time to equal was the same in either case.

  • We also tried initializing the write array at the start of the program, but the effect was the same.

  • Without the networked shared variable array set to FIFO, the time to equal ranged anywhere from 60 ms to 4000+ ms. With FIFO, it was between 60 ms and 200 ms.

  • We tried to replace elements in the array programmatically, just by stuffing the loop counter in there, to eliminate front panel issues, and that just made things worse. By manually changing values, we were giving it the time it needed to reflect that change.

  • We disconnected everything but the host PC and the one RT system from the switch. Haven't tried connecting them with just a crossover cable.

Some questions:

  • Is it the write or the read process that's taking time?

  • Apart from being non-deterministic, are the networked shared variables also just slow?

  • Why is there a somewhat exponential jump in the CPU usage if you change either one of the read or write loop wait times from, say, 100 ms to 10 ms?

  • Could some buffering scheme help us here? We've tried to add buffering to the shared variable but haven't noticed an improvement.

And the prize question, how do we get this to work?!

Message 5 of 7
So I'll take a swing at one of the questions. The reason your CPU usage increases when you decrease the delay time is that the loop iterates faster; that is, it executes more times per second, which requires more CPU time. If you remove the wait entirely, you'll see CPU usage climb to 100%, because the loop iterates as fast as the computer can go.
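As a back-of-envelope model (my own assumption, not a measurement): if each iteration costs a roughly fixed amount of CPU work, usage scales with iterations per second, so cutting the wait from 100 ms to 10 ms multiplies the load by about 10x rather than producing a gradual change.

```python
# Toy model: CPU usage ~ (per-iteration cost in ms) * (iterations per
# second) / 1000 ms, expressed as a percentage and capped at 100%.
def cpu_usage_percent(per_iteration_cost_ms, wait_ms):
    iterations_per_sec = 1000.0 / wait_ms
    return min(100.0, per_iteration_cost_ms * iterations_per_sec / 10.0)

# With a hypothetical 6 ms of work per iteration:
low = cpu_usage_percent(6, 100)   # 100 ms wait -> 10 iterations/s -> 6%
high = cpu_usage_percent(6, 10)   # 10 ms wait -> 100 iterations/s -> 60%
assert (low, high) == (6.0, 60.0)
```

The jump isn't really exponential, just linear in the iteration rate, but a 10x rate change looks dramatic when the per-iteration cost (array copies, shared variable access) is already substantial.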
Chris C
Applications Engineering
National Instruments
Message 6 of 7

Yeah, I'm aware of that. I just didn't think there would be such a large jump: about an extra 60% when we changed the loop time from 100 ms to 10 ms. That pushes the CPU usage close to 100%, which seemed like too much for just that test VI running. Knocking the array size down to about 1000 elements helped, as did making the shared variable a single-process FIFO. The network communication for a network-published shared variable eats up a good deal of both CPU usage and time. A couple of NI application engineers I've spoken to seem to think that reading and writing to a networked shared variable that fast (1 ms) is possible, but others don't, and I'm leaning towards the latter 🙂 I think, as Paul mentioned, we should have a second loop to transfer data across the network.

I'm interested to know how other people developed their applications for high-speed data acquisition and logging, particularly across a network.

Message 7 of 7