I have a total of 13 VIs with graphics running on a cRIO-9037, with global variable sharing between the VIs. The VIs are pop-up windows. The remote panel connection to any one of those VIs disconnects because the client can't keep up with the server. I know there are other posts on this subject, but they're old enough that I wonder whether PC CPU speeds have changed enough to make their point moot. I'm hoping there's an easy solution, but if the scale of my program is the issue, I wonder if I should switch from globals to network shared variables and just run a separate program on a PC. Thoughts?
How are you viewing these VIs?
Are you running the VIs on the RT target and viewing them directly from your PC? Or do you have a screen connected to the cRIO with the embedded UI option? Remotely viewing VIs running on a target is quite resource-intensive, so it's generally not recommended as anything but an early-stage debugging option.
Generally, it's highly recommended that any UI reside on a host PC (unless you're using the embedded UI, which comes with its own bucket of caveats), and that data be sent back and forth using Network-Published Shared Variables (NPSVs) or Network Streams.
A good place to start is the "LabVIEW FPGA Control on CompactRIO" template project.
It may also be worth keeping an eye on the resource usage of your system: Monitoring CPU and Memory Usage on Real-Time Embedded Targets - NI
On a side note, an application that runs 14 different VIs sharing data through global variables doesn't sound like the most solid and stable approach. If you give us some more insight, I'm sure we can help you refactor a bit and suggest a better approach.
Thank you for your response. We are using the embedded UI connected to a local touchscreen. I'm not sure why we went with multiple VIs. I think the network variables are the way to go.
We recently worked on a system with a remote UI application connected to a cRIO running the embedded control, and used NPSVs to publish values to the remote UI. There was only one UI application, but plenty of variables being trended on graphs. I seem to remember there is a recommended maximum number of NPSVs, which we were close to (even after clustering some variables together).

The remote UI was connected to the cRIO over a 4G link and was consuming about 600 kb/s when we first tried it, before we had done anything to streamline its bandwidth use. Improving the message handling got it down to about 120 kb/s, which was about right for the data being passed. The client wanted the option of running it over a UHF radio link with a 20 kb/s limit, and we managed that by optimising the sample rates, moving to singles, etc., and still got a usable remote UI at 20 kb/s.
The point of my rambling message is that we were surprised how efficient (in terms of data transfer) NPSVs are, and that there was a lot we could do to optimise and tune the remote UI's behaviour. A lot is hidden if you just use NPSVs as-is, but it's good to understand what's underneath if you're trying to push performance; most people are trying to maximise data transfer rates, whereas for us it was about minimising them.
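For anyone doing a similar exercise, the kind of optimisation described above (fewer channels, lower publish rates, SGL instead of DBL) can be sanity-checked with a back-of-the-envelope bandwidth estimate before touching the code. Here's a rough sketch in Python; the channel counts, rates, and per-update overhead figure are illustrative assumptions I made up, not numbers from the actual system:

```python
# Rough NPSV bandwidth estimate: payload plus a guessed per-update
# protocol overhead. All numbers here are hypothetical examples.

def estimate_bandwidth_kbps(channels, update_rate_hz, bytes_per_sample,
                            overhead_bytes=20):
    """Approximate network load in kilobytes per second.

    overhead_bytes is an assumed per-update protocol cost, not a
    documented NPSV figure.
    """
    payload = channels * update_rate_hz * bytes_per_sample
    overhead = channels * update_rate_hz * overhead_bytes
    return (payload + overhead) / 1000.0

# 50 channels of DBL (8 bytes) published at 10 Hz
dbl_fast = estimate_bandwidth_kbps(50, 10, 8)
# Same channels as SGL (4 bytes) slowed to 2 Hz
sgl_slow = estimate_bandwidth_kbps(50, 2, 4)
print(f"DBL @ 10 Hz: {dbl_fast:.1f} kB/s, SGL @ 2 Hz: {sgl_slow:.1f} kB/s")
```

Even with made-up numbers, this shows why dropping the publish rate usually buys more than shrinking the data type: the per-update overhead scales with rate, not sample width.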