06-26-2008 12:50 PM
LabVIEW 7.1.1 and 8.5.
My application requires an XY Graph that allows the Operator to select which of many channels are plotted on the X and Y axes. I have implemented a 10,000-point circular buffer that updates every 200 ms. The problem occurs after testing is complete, when the buffer gradually fills with noise for each of the two selected channels. The result is a tight blob of 10,000 points, all connected with lines, on the XY Graph. Since auto-scaling is enabled, the blob is maximized to fill the XY Graph. In addition, the Operator controls when the XY plot is enabled/disabled. This scenario causes excessive CPU usage that bogs down the whole computer and leaves it nearly unresponsive to Operator selections.
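As a rough text-only sketch of that buffering scheme (in Python, since the attached VI is the authoritative graphical version; the names and the one-sample-per-update assumption are mine):

```python
from collections import deque

BUFFER_SIZE = 10000    # points kept per selected channel
UPDATE_PERIOD_MS = 200  # one buffer update every 200 ms

# A deque with maxlen drops its oldest element on append once full,
# which is the circular-buffer behavior described above.
x_buffer = deque(maxlen=BUFFER_SIZE)
y_buffer = deque(maxlen=BUFFER_SIZE)

def on_update(x_sample, y_sample):
    """Runs every UPDATE_PERIOD_MS with the Operator-selected channels."""
    x_buffer.append(x_sample)
    y_buffer.append(y_sample)
    # The XY Graph is then redrawn from all 10,000 (x, y) pairs,
    # which is where the CPU cost shows up once the data is a blob.
```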
The attached VI is a very simple (reduced) example of what I am experiencing.
Through some troubleshooting, I have learned the following.
The point style, interpolation, and line thickness all directly affect CPU usage. A round point requires more CPU than a square point, a large dot more than a small dot, and a thick line more than a thin line or no line at all.
In summary, what I observe is that the more pixels required to render the graph, the higher the CPU usage.
Does anyone have any ideas of how to further reduce the CPU usage either by other XY Graph settings or by some method of intelligently selecting which points to plot?
Thank you.
06-26-2008 01:59 PM - edited 06-26-2008 02:03 PM
06-26-2008 02:29 PM - edited 06-26-2008 02:31 PM
To me it does not seem unreasonable to have 10k points on a graph. (The data in the example is only sample data, not real data.) It seems that regardless of the number of points, if the plotted data takes up a significant portion of the plot area you see the same results: LabVIEW uses a lot of CPU to plot it. If the same number of points follow a more continuous line over a larger range, leaving a lot of empty space in the plot area, LabVIEW handles it without any trouble.
Is there any way to make this more efficient, other than modifying the data and the range that the data occupies?
06-26-2008 02:47 PM
I understand that 10K points is a lot; however, there are reasons for this. I can and will look into reducing the number of plotted points.
I should also make one thing clear: any time valid data is plotted and there is actual useful data displayed on the XY Graph, the CPU usage is very low. When useful data is displayed, the data points are spread out over the whole plot, which is constantly auto-scaling. The problem only seems to occur when many overlapping points fill the plot, forming a single large blob of lines and points.
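To make "reducing the number of plotted points" concrete, the simplest form of intelligent point selection is plain decimation before the data ever reaches the graph. This is only a sketch in Python (the function name and the 1,000-point target are invented; in LabVIEW the same thing could be wired from Decimate 1D Array or array indexing):

```python
def decimate_for_plot(xs, ys, max_points=1000):
    """Keep at most ~max_points (x, y) pairs by taking every Nth sample.
    Fewer points means fewer markers AND fewer connecting line segments,
    both of which were identified above as the CPU cost.
    max_points=1000 is an arbitrary starting value, not a recommendation."""
    stride = max(1, len(xs) // max_points)
    return xs[::stride], ys[::stride]
```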
Thanks
06-26-2008 02:59 PM
06-26-2008 03:06 PM
It sounds like the issue is when you have overlapping points.
Try using the "Defer Front Panel Updates" property: defer updates (T) before the data is presented to the graph, then undefer (F) after the update.
Ben
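For anyone who has not used that property: the pattern is to stop front panel redraws, write the new data, then re-enable redraws so the panel repaints exactly once. A rough Python analogue (the panel and graph objects here are hypothetical stand-ins for the LabVIEW property node and graph terminal, not a real API):

```python
def update_graph(panel, graph, x_data, y_data):
    """Mirror of the Defer FP Updates pattern; `panel` and `graph` are
    hypothetical stand-ins for the front panel property node and the
    XY Graph terminal."""
    panel.defer_updates = True           # property write: Defer FP Updates = T
    try:
        graph.value = (x_data, y_data)   # present the new data to the graph
    finally:
        panel.defer_updates = False      # property write: Defer FP Updates = F
        # the panel repaints once here, instead of incrementally
```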
06-26-2008 04:58 PM
06-27-2008 07:48 AM
06-27-2008 07:59 AM - edited 06-27-2008 08:00 AM
Again, defer Front Panel updates before the data is presented to the graph.
Region A did not use defer FP updates.
Region B DID use defer FP updates.
This is the code I used.
Ben
06-27-2008 11:23 AM
Ben,
The "Defer Front Panel Updates" is a great suggestion! I have used this property, but never in a situation like this where it is called twice every 200ms. Do you see a fundamental problem with calling this property with such frequency? In addition, the VI that I posted represents the simplest example that captures that displays the problem. My full application includes several other screens that are all open at the same time and include parallel running loops as well as communications with an RT. The VI that includes this Graph also has many Event states driven from front panel buttons. Will defering the front panel updates cause any issues with capturing any of these events? I am sure that I can evaluate your suggestion in my full application based upon CPU usage alone, but is there anything else that I should also look at to ensure that the evaluation is comprehensive?
Lynn,
Since the various data channels cover a very wide range (~100 dB) of signal levels, it would be tricky to detect at what point the graph is displaying meaningless data; a rough sketch of one possible spread test appears below, but it still needs a per-channel noise floor.
Steve
I agree that it would be best if the XY Graph could simply (somehow) deal with this situation itself. That would clearly rest at the feet of the LabVIEW developers.
Thanks for all of your feedback!!
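On the detection idea, one possible direction is a relative spread test; this is only a sketch, and the per-channel noise floor it requires is exactly what the ~100 dB range makes hard:

```python
def looks_settled(samples, noise_floor, k=5.0):
    """Treat the buffer as post-test noise when its peak-to-peak spread
    stays within k times this channel's characterized noise floor.
    Both noise_floor and k are assumptions: the floor would have to be
    measured per channel, which is what makes this tricky in practice."""
    spread = max(samples) - min(samples)
    return spread <= k * noise_floor
```

If such a test fired, the plot update could be skipped or the decimation made more aggressive, sidestepping the blob without changing any graph settings.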