An honest conversation about the usage of Global Variables in LabVIEW, including why they are often called "evil", their proper usage, and alternatives.
Nice presentation Tim.
Thank you for the mention when you spoke of Action Engines.
Another issue with Global variables is the impact on performance when we use many instances of the same Global. In the thread I linked above, I questioned why one read of a global would be slower than another, and I do not think I ever got a good answer. In that case I was using a global as a stop flag to stop about 250 Timed Loops, and the global use introduced "Finished Late" situations.
The problem was solved by replacing all of the global reads with a simple Action Engine.
I am mentioning that situation here to let everyone know that multiple global reads can potentially impact performance.
Curious about the many readers. I will have to play around with that when I find a little time. My gut feeling is that your issue was resolved somewhere along the last 10 years. I am not saying you are wrong, just that it does not feel right.
I hope you are right Tim.
It was the one time that I "gave globals a chance" and the rest is history.
If you are going to try and recreate that situation: I was using hardware-timed loops, so the testing may not be valid without using the hardware Timed Loops.
Aside from this one point and my old song about "but those that follow you may not have read your presentation" I agree with all of the other points you made.
Thank you again for standing up and working to make LabVIEW great.
Just thinking out loud in general about global variables, and without regard to hardware timed loops. Remember that LV global variables are backed by a file. Won't every write and read operation of a global then result in a blocking OS call to write/read that file? This would make every global read highly non-deterministic, and so could explain why one read operation might be slower than another. Yes, the OS and disk itself are likely caching that data; but this is irrelevant because the OS call itself will still block, and will therefore still be very non-deterministic whether the data is retrieved from cache or a true disk read must take place.
Ben's Action Engine solved this for reads, because the global variable is now cached in a LV Shift Register, guaranteeing that no OS call will be needed to access the value. WRITING to the global, however, even if within the Action Engine, would still require an OS call.
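Since LabVIEW code is graphical, here is a rough textual analogue (in Python, purely illustrative and not the poster's actual code) of the Action Engine pattern being described: a single non-reentrant routine caches the value in what stands in for a shift register, so readers get the cached copy rather than hitting the global. The class and method names are invented for this sketch.

```python
import threading

class StopFlagEngine:
    """Textual analogue of a LabVIEW Action Engine (functional global).

    The value lives in one place (standing in for the shift register),
    and the lock stands in for the fact that a non-reentrant VI runs
    only one call at a time.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._stop = False  # the "shift register" holding the cached value

    def call(self, action, value=None):
        # One caller at a time, like a non-reentrant VI.
        with self._lock:
            if action == "set":
                self._stop = bool(value)
            # Both "set" and "read" return the cached value,
            # with no file or OS access involved.
            return self._stop

engine = StopFlagEngine()
engine.call("set", True)
print(engine.call("read"))  # True
```

In the scenario described above, the 250 loops would each call `engine.call("read")` instead of reading the global directly, so every read is just a cached in-memory access.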
Does this make sense, or am I just rambling nonsense?
Remember that LV global variables are backed by a file. Won't every write and read operation of a global then result in a blocking OS call to write/read that file?
I am sure the globals are loaded into memory; there is no file I/O involved during the run.
Remember that LV global variables are backed by a file. Won't every write and read operation of a global then result in a blocking OS call to write/read that file? This would make every global read highly non-deterministic
As far as I know, the Global Variable is just a special type of VI. So by your logic, every VI is also writing to a file as it runs? I know absolutely for sure that is not the case. For more evidence, notice that you can set the Default Value for a Global. If it were actually backed by a file, that option would not be there, and your last written values would persist between application runs (i.e. when the global is taken out of memory).
But I will state that Globals are not intended for determinism. They are fast enough, though, that it should not really matter in a normal Windows application.
Yes, there is no disk I/O involved with access to a global (save mapping virtual memory, blah blah blah).
To the best of my knowledge, and going back as far as I have been involved, ALL VIs are loaded into memory before an application starts (excluding the dynamically loaded stuff, of course). That includes globals.
Disk I/O only gets involved when loading dynamically, or when mapping additional memory as data buffers.
Since posting my reminder of what I witnessed in LV 7.x, I attempted to recreate the situation, but I do not have a hardware timing source, so I could not duplicate the problem we saw back when we only had a single CPU (or were we up to two CPUs at that time?) running under (maybe) Windows NT with a bunch of Timed Loops.
The engineer I was working with at the time (he is now a senior architect) still remembers the situation where we were attempting to integrate the large number of parallel loops and were getting the "Finished Late" errors. He still does not trust them.
Once burned twice ....
One other item makes an AE a viable selection for something like a Stop Flag (goes true when it is time to stop).
If the AE is set as subroutine priority, we get an option for each instance of the AE called "Skip If Busy".
Prior to the Event Structure being introduced (and not counting the IMAQ image display), it was the one example of a violation of the dataflow paradigm.
If that option was selected AND the AE was busy, LV would not wait on the AE but would proceed past the AE call, with the returned values being the defaults for the data types coming out of the AE. In the earliest days of RT this was the BEST (and only) method to pass info into a deterministic loop without impacting determinism.
The Skip If Busy option was good if you wanted to learn about a value change but did not need to ensure it was received immediately. I used it to pass the throttle position and other settings into a jet aircraft engine simulator back in LV 6.1, and that same code is still running today, under a more modern version of LV.
So "Skip If Busy" is a way that LV can "cheat" the dataflow paradigm.
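For readers outside LabVIEW, the behavior being described maps onto a familiar concurrency idiom: a non-blocking try-lock where, if the "engine" is busy, the caller does not wait but immediately gets the default value for the output type. The Python sketch below (names invented for illustration; not actual LabVIEW code) shows that idea with `threading.Lock.acquire(blocking=False)`.

```python
import threading

stop_flag = True  # pretend the stop flag was already set by a writer
engine_lock = threading.Lock()  # stands in for the busy/free state of the AE

def read_stop_skip_if_busy():
    """'Skip If Busy' analogue: if the engine is free, return the real
    value; if it is busy, return the default for the output type (False
    for a boolean) instead of blocking, so a deterministic caller never
    waits on this call."""
    if engine_lock.acquire(blocking=False):  # try-lock, no waiting
        try:
            return stop_flag
        finally:
            engine_lock.release()
    return False  # default value of the boolean output

print(read_stop_skip_if_busy())  # True: engine free, real value returned

engine_lock.acquire()            # simulate another caller holding the engine
print(read_stop_skip_if_busy())  # False: busy, so the default comes back
engine_lock.release()
```

The trade-off is exactly the one described in the post above: the deterministic loop may miss an update on any given iteration, which is acceptable when it only needs to learn of the change eventually, not immediately.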