
Why are charts and graphs so difficult?


@mcduff wrote:

It is actually built in, see https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019YLKSA2&l=en-US

 

The problem then becomes memory; an indicator has at least 2 copies of the data, for large datasets it becomes prohibitive.

 

mcduff


That is disturbing.

 

Why does it perform so badly?

 

When I add (a second) min/max decimation, I make the memory issue even worse. But the graph performs much better, without any negative side effect (the data looks exactly the same).

 

I do my decimation each update for all my undecimated data, but it is still orders of magnitude faster than feeding it to the graph.
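The min/max decimation described above can be sketched in a few lines. This is a hypothetical Python illustration (not the poster's actual VI): each bucket of samples is reduced to its minimum and maximum, so peaks survive and the plotted trace looks the same while the graph receives only about two points per horizontal pixel.

```python
def minmax_decimate(samples, width):
    """Reduce `samples` to at most 2*width points by keeping the min and
    max of each bucket, so peaks survive and the trace looks identical."""
    n = len(samples)
    if n <= 2 * width:
        return list(samples)  # already small enough; pass through
    out = []
    for i in range(width):
        # Integer bucket boundaries that cover the whole array evenly.
        bucket = samples[i * n // width:(i + 1) * n // width]
        out.append(min(bucket))
        out.append(max(bucket))
    return out
```

For a graph that is, say, 500 pixels wide, feeding it `minmax_decimate(data, 500)` instead of a million raw points preserves every spike while cutting the draw cost drastically.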

 

Somebody screwed up, AFAIC...

Message 21 of 27

wiebe@CARYA wrote:
I do my decimation each update for all my undecimated data, but it is still orders of magnitude faster than feeding it to the graph.

A graph is much more than data. It has to map everything into the drawing area, can't decimate because it needs to allow unlimited zooming and panning, has to draw lines between all points (more expensive for thicker lines, even more expensive if anti-aliased), add grid lines, autoscale axes, and layer all plots in the right z-order. All of this has to happen in the UI thread (single-threaded, shared by the entire UI), which also copies the data from the transfer buffer, because user interaction (zooming, etc.) is completely asynchronous.

 

Sure, the graph could do its own decimation, but again, the UI thread is taxed. 

 

In some limited scenarios, one could replace the graph with a simple intensity graph, sized at 1 pixel per screen pixel, and map all data into it via code. Here is an old example that can incrementally draw billions of lines on top of each other without ever running out of memory. 😉
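The intensity-graph trick above boils down to accumulating points into a fixed-size pixel grid, so memory stays constant no matter how many points have been drawn. A minimal Python sketch of the idea (the buffer dimensions and axis ranges are made-up assumptions, not from the original example):

```python
# Fixed-size intensity buffer: memory is WIDTH*HEIGHT counters,
# independent of how many points are accumulated into it.
WIDTH, HEIGHT = 400, 300
XMIN, XMAX, YMIN, YMAX = 0.0, 10.0, -1.0, 1.0

counts = [[0] * WIDTH for _ in range(HEIGHT)]

def accumulate(xs, ys):
    """Bin each (x, y) point into the pixel grid; off-screen points are skipped."""
    for x, y in zip(xs, ys):
        px = int((x - XMIN) / (XMAX - XMIN) * (WIDTH - 1))
        py = int((y - YMIN) / (YMAX - YMIN) * (HEIGHT - 1))
        if 0 <= px < WIDTH and 0 <= py < HEIGHT:
            counts[py][px] += 1
```

Each incoming chunk of data updates `counts`, and only the small 2D buffer ever needs to be handed to the display, which is why the approach never runs out of memory.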

 

 

 

 

Not sure what NXG is doing, or whether it can utilize GPU resources better for this.


LabVIEW Champion. It all comes together in GCentral
What does "Engineering Redefined" mean??
Message 22 of 27

@altenbach wrote:

wiebe@CARYA wrote:
I do my decimation each update for all my undecimated data, but it is still orders of magnitude faster than feeding it to the graph.

A graph is much more than data. It has to map everything into the drawing area, can't decimate because it needs to allow unlimited zooming and panning, has to draw lines between all points (more expensive for thicker lines, even more expensive if anti-aliased), add grid lines, autoscale axes, and layer all plots in the right z-order. All of this has to happen in the UI thread (single-threaded, shared by the entire UI), which also copies the data from the transfer buffer, because user interaction (zooming, etc.) is completely asynchronous.

 

Sure, the graph could do its own decimation, but again, the UI thread is taxed. 

 

In some limited scenarios, one could replace the graph with a simple intensity graph, sized at 1 pixel per screen pixel, and map all data into it via code. Here is an old example that can incrementally draw billions of lines on top of each other without ever running out of memory. 😉

 

 

 

 

Not sure what NXG is doing, or whether it can utilize GPU resources better for this.


Now I'm totally confused.

 

Adding my own decimation makes graphs fast.

I suggested that graphs need built-in decimation.

Turns out graphs have decimation.

They are still slow.

 

Although it is a lot of work, it is possible, even for me (given my limited access to events, etc.), to make graphs efficient with decimation.

 

This includes: drawing lines between all points, grid lines, autoscaling axes, and layering all plots in the right z-order. I haven't considered thicker or anti-aliased lines, but it shouldn't matter, since it is the data points that are decimated, not the pixels that are drawn.

 

If I can make graphs efficient with decimation, NI should be able to do it.

 

 

Message 23 of 27

wiebe@CARYA wrote:
If I can make graphs efficient with decimation, NI should be able to do it.

But you have more information on the decimation needed. LabVIEW must assume that you later want to zoom into an offscreen area or into a much smaller area where all points (i.e. pre-decimation) need to be shown. The UI thread is shared with many things and can only use one core. Have you tried setting your decimation routine to run in the UI thread?

 

I should stop talking, because I really don't know how things are implemented in detail ;)) I am sure the underlying code is ancient, has some cobwebs, and could benefit from 30 years of insight into how to do things better. 🙂


LabVIEW Champion. It all comes together in GCentral
What does "Engineering Redefined" mean??
Message 24 of 27

@altenbach wrote:

wiebe@CARYA wrote:
If I can make graphs efficient with decimation, NI should be able to do it.

But you have more information on the decimation needed. LabVIEW must assume that you later want to zoom into an offscreen area or into a much smaller area where all points (i.e. pre-decimation) need to be shown. The UI thread is shared with many things and can only use one core. Have you tried setting your decimation routine to run in the UI thread?

 

I should stop talking, because I really don't know how things are implemented in detail ;)) I am sure the underlying code is ancient, has some cobwebs, and could benefit from 30 years of insight into how to do things better. 🙂


Just guessing here; this is @rolfk territory.

 

LabVIEW has some distinct rules about data copies in controls and indicators; normally there are at least two copies of the data. How it accesses this data to do the decimation is unknown, as is whether the decimation is done every time the UI is updated (for example, if I update a digital indicator, is the plot also redrawn/refreshed/decimated?). Personally, I keep one copy of the original data buffer and decimate it as needed; for example, if someone zooms in or out, I re-decimate the data so it looks correct. Besides performance, I do this for memory reasons, as I do not want multiple copies of millions of points.
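The single-buffer, re-decimate-on-zoom approach described above can be sketched as follows. This is a hypothetical Python helper (not the poster's actual code), assuming the x-values in the master buffer are sorted: on each zoom or pan, slice the buffer to the visible range, then min/max-decimate the slice so the graph only ever receives about two points per horizontal pixel.

```python
import bisect

def view(xs, ys, x_lo, x_hi, pixel_width):
    """Return the decimated y-data for the visible x-range [x_lo, x_hi].

    The one master buffer (xs, ys) is kept untouched; each zoom/pan just
    re-slices and re-decimates it, so memory stays at one copy.
    """
    lo = bisect.bisect_left(xs, x_lo)
    hi = bisect.bisect_right(xs, x_hi)
    seg = ys[lo:hi]
    n = len(seg)
    if n <= 2 * pixel_width:
        return seg  # zoomed in far enough: show every raw point
    out = []
    for i in range(pixel_width):
        # Min/max per pixel column preserves spikes in the rendered trace.
        bucket = seg[i * n // pixel_width:(i + 1) * n // pixel_width]
        out.append(min(bucket))
        out.append(max(bucket))
    return out
```

Note how zooming in eventually drops below the `2 * pixel_width` threshold, at which point the raw (pre-decimation) points are shown, which is exactly the behavior a built-in graph would have to guarantee.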

 

mcduff

Message 25 of 27

@altenbach wrote:

wiebe@CARYA wrote:
If I can make graphs efficient with decimation, NI should be able to do it.

But you have more information on the decimation needed. LabVIEW must assume that you later want to zoom into an offscreen area or into a much smaller area where all points (i.e. pre-decimation) need to be shown. The UI thread is shared with many things and can only use one core. Have you tried setting your decimation routine to run in the UI thread?


I have all the data, and when zooming in, I can re-decimate the original data and show the zoomed-in view. The graph can do the same.

 

That will take time, but not as much as it takes now. I know, because when I decimate, it takes far less time to draw. The graph could do exactly the same, as it has exactly the same data I have: the undecimated data.

 

But you're right. It's easy to talk about how it could be from the sidelines...

Message 26 of 27

I've thought about making a "better graph" QControl that incorporates automatic decimation, some advanced formatting, etc. It could even integrate with the Advanced Plotting Toolkit (which might be worth a look anyway; I think it uses a different plotting library... matplotlib, maybe?)

Message 27 of 27