
High CPU usage drawing XY Graph

LabVIEW 7.1.1 and 8.5.

 

My application requires an XY Graph that allows the Operator to select which of many channels are plotted on the X and Y axes.  I have implemented a 10,000-point circular buffer that updates every 200 ms.  The problem occurs after testing is complete, when the buffer gradually fills with noise for each of the two selected channels.  The result is a tight blob of 10,000 points, all connected with lines, on the XY Graph.  Since auto-scaling is enabled, the blob of data is maximized to fill the XY Graph.  In addition, the Operator controls when the XY plot is enabled/disabled.  The result of this scenario is excessive CPU usage that bogs down the whole computer and leaves it nearly unresponsive to Operator selections.
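(For reference, here is the buffering pattern described above sketched in Python, since the VI itself is graphical; all names are illustrative, not from the actual application.)

```python
from collections import deque

BUFFER_SIZE = 10_000  # matches the 10,000-point buffer described above

# A deque with maxlen behaves as a circular buffer: appending past
# maxlen silently discards the oldest sample.
x_buf = deque(maxlen=BUFFER_SIZE)
y_buf = deque(maxlen=BUFFER_SIZE)

def on_timeout(new_x, new_y):
    """Called every 200 ms with fresh samples from the two
    Operator-selected channels; returns the data handed to the graph."""
    x_buf.extend(new_x)
    y_buf.extend(new_y)
    return list(x_buf), list(y_buf)
```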

 

The attached VI is a very simple (reduced) example of what I am experiencing.

 

Through some troubleshooting, I have learned the following:

The point style, interpolation, and line thickness all directly affect CPU usage.  A round point requires more CPU than a square point, a large dot more than a small dot, and a thick line more than a thin line or no line at all.

The summary of what I observe is that the more pixels required to represent the graph, the higher the CPU usage.

 

Does anyone have any ideas on how to further reduce the CPU usage, either through other XY Graph settings or through some method of intelligently selecting which points to plot?

 

Thank you.

Message 1 of 16
Hi crcragun,

"
the more pixels that are required to be represent the graph, the higher the CPU usage" - that's true! Smiley Very Happy

So your options are:
- obvious: draw fewer points (with thinner lines)! Can you really distinguish 10k points on this plot? Especially when they sit on just 5 x-values? (See the decimation sketch after this list.)
- also obvious: don't draw on every timeout case! Draw only when new points are added or plot options change...
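(An editor's sketch of the first suggestion in Python, since G can't be posted as text: min/max decimation by sample-index buckets. It assumes samples arrive in time order, and it keeps each bucket's extreme values so the drawn envelope still matches the full data set; function and parameter names are illustrative.)

```python
import numpy as np

def minmax_decimate(x, y, max_points=500):
    """Reduce (x, y) to at most ~max_points for display. Each bucket
    of consecutive samples contributes its min and max y value, so
    spikes survive that naive subsampling would miss."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    if n <= max_points:
        return x, y
    keep = []
    # Bucket by sample index (assumes time-ordered acquisition).
    for b in np.array_split(np.arange(n), max_points // 2):
        keep.extend(sorted((b[np.argmin(y[b])], b[np.argmax(y[b])])))
    keep = np.asarray(keep)
    return x[keep], y[keep]
```

Feeding the graph the decimated arrays instead of all 10k points cuts the pixels drawn per update, which is exactly the quantity the CPU usage tracks.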



Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 2 of 16

To me it does not seem unreasonable to have 10k points on a graph (the data in the example is only sample data, not real data).  It seems that regardless of the number of points, if the data takes up a significant portion of the plot area, you see the same results: LV uses a lot of CPU to plot it.  If the same number of points follow a more continuous line over a larger range, leaving a lot of empty space in the plot area, LV handles it without any trouble.

Is there any way to make this more efficient other than modifying the data and the range that the data occupies?



SteveA
CLD

-------------------------------------
FPGA/RT/PDA/TP/DSC
-------------------------------------
Message 3 of 16

I understand that 10K points is a lot; however, there are reasons for this.  I can and will look into reducing the number of plotted points.

I should also make one thing clear: any time valid data is plotted and there is actual useful data displayed on the XY Graph, the CPU usage is very low.  When useful data is displayed, the data points are spread out over the whole plot, which is constantly auto-scaling.  The problem only seems to occur when many points fill the plot, forming a single large blob of lines and points.

Thanks

Message 4 of 16
Can you identify that "blob" by some simple measure, such as the standard deviation or max - min? If so, you could suppress the plot and show a boolean indicator reading "Invalid data" or something similar.
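(An editor's sketch of this idea in Python, since the actual check would be G code; the threshold value is illustrative and would need tuning against real channel levels.)

```python
import numpy as np

def is_invalid_blob(x, y, threshold=1e-3):
    """Flag the data as a meaningless blob when both channels' spread
    falls below a threshold; std dev or (max - min) would both work."""
    return np.std(x) < threshold and np.std(y) < threshold
```

When this returns TRUE, the graph update could simply be skipped and an "Invalid data" boolean lit instead, so the expensive redraw never happens.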

Lynn
Message 5 of 16

It sounds like the issue arises when you have overlapping points.

Try using the "defer front panel updates" property: defer updates (TRUE) before the data is presented to the graph, then undefer (FALSE) after the update.
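(A sketch of the defer/undefer pattern, using a hypothetical panel object in Python since the actual property node is G code; `defer_updates` here merely stands in for LabVIEW's Defer Panel Updates property.)

```python
import contextlib

@contextlib.contextmanager
def deferred_updates(panel):
    """Suspend redraws, make every change, then repaint once
    instead of once per change."""
    panel.defer_updates = True       # property write: TRUE before updating
    try:
        yield
    finally:
        panel.defer_updates = False  # FALSE afterwards triggers one redraw

# Per 200 ms iteration (hypothetical objects):
# with deferred_updates(front_panel):
#     xy_graph.value = (x_data, y_data)
```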

Ben

Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper
Message 6 of 16
I have experienced high CPU usage on XY graphs in the past... I know there are ways to manipulate the data you put on the graph to make it less intensive on your CPU, but I would like to see NI work on making this more efficient.  I ran the posted example on my machine (3 GHz dual-core Xeon with 3 GB RAM and a 256 MB Quadro FX 3500 video card) and it brought the machine to its knees.
SteveA
CLD

-------------------------------------
FPGA/RT/PDA/TP/DSC
-------------------------------------
Message 7 of 16
It may be an OS issue, at least in part, or a video driver issue. I ran the posted VI on my dual G5 (PPC) Mac. The CPU usage does increase with interpolation on, but the program remains responsive. I ran a heavy-duty number cruncher simultaneously and did not notice any sluggishness in either VI. Other (non-LV) programs also respond normally while it runs. Both VIs are running while I type this. (It does not make my typing any better, though!)

Lynn
Message 8 of 16

Again, defer Front Panel updates before the data is presented to the graph.

Region A did not use defer FP updates.

Region B DID use defer FP updates.

This is the code I used.

Ben



Retired Senior Automation Systems Architect with Data Science Automation | LabVIEW Champion | Knight of NI and Prepper
Message 9 of 16

Ben,

The "Defer Front Panel Updates" is a great suggestion!  I have used this property, but never in a situation like this where it is called twice every 200ms.  Do you see a fundamental problem with calling this property with such frequency?  In addition, the VI that I posted represents the simplest example that captures that displays the problem.  My full application includes several other screens that are all open at the same time and include parallel running loops as well as communications with an RT.  The VI that includes this Graph also has many Event states driven from front panel buttons.  Will defering the front panel updates cause any issues with capturing any of these events?  I am sure that I can evaluate your suggestion in my full application based upon CPU usage alone, but is there anything else that I should also look at to ensure that the evaluation is comprehensive?

 

Lynn,

Since the various data channels cover a very wide range (~100 dB) of signal levels, it would be tricky to detect the point at which the graph is displaying meaningless data.
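(An editor's sketch of one way around the dynamic-range problem: compare each channel's spread to its own magnitude rather than to a fixed level, so the same test works anywhere across the ~100 dB range. Both thresholds below are illustrative assumptions, not values from the application.)

```python
import numpy as np

def is_noise_blob(channel, rel_threshold=0.05, abs_floor=1e-6):
    """Heuristic: data is 'blob-like' when its peak-to-peak spread is
    small relative to its own mean magnitude (scale-invariant part),
    or below an absolute noise floor (covers zero-mean noise)."""
    v = np.asarray(channel, dtype=float)
    ptp = v.max() - v.min()
    scale = max(np.abs(v).mean(), abs_floor)
    return ptp < abs_floor or ptp / scale < rel_threshold
```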

 

Steve

I agree that it would be best if the XY Graph could simply (somehow) deal with this situation.  That would clearly rest at the feet of the LabVIEW developers.

Thanks for all of your feedback!!

Message 10 of 16