Why are charts and graphs so difficult?


@Bob_Schor wrote:
While this is certainly a clever idea, it is a "trick" that works for a peculiar reason.  Recall that the name of this Indicator is Waveform Chart.  A Waveform has three components:  t0, the starting X Value, dt, the constant spacing of all the unspecified, but computable X values, and Y, an array of all the Y Values.  This "trick" lets you specify X (as t0), ignore dt (since you have only a single X), and replace the Y array with a single-element Array holding the single value of Y.  I suppose you do get scrolling and other "Chart" attributes (I've never actually tried using this trick, though I applaud its cleverness).

 


Obviously, it "works", but probably just because some clever programmers added a lot of duct tape behind the scenes. 😉 While I always knew about it, it's not something I would ever be willing to use.

 

A waveform is defined by x0, dx, and an array of [Y], and here each point is its own waveform. So if the graph contains multiple waveforms, why are they all contracted into a single plot and optionally connected by lines? Even the data structures are wasteful, because each point requires at least 28 bytes (8 bytes for x0, 8 bytes for dx, 8 bytes for Y, and 4 bytes defining the size of each Y array, plus more). In comparison, if you maintain the xy graph in a complex array, we only need 16 bytes per point (or less if SGL is an option).
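The byte counting above can be checked back-of-the-envelope. This is a minimal Python sketch using only the sizes stated in the post; actual LabVIEW overhead is higher, as the "plus more" notes:

```python
# Per single-point waveform, as counted in the post:
# x0 (8 B) + dx (8 B) + one DBL Y value (8 B) + array size field (4 B).
BYTES_PER_SINGLE_POINT_WAVEFORM = 8 + 8 + 8 + 4   # = 28, plus more in practice

# Per point stored as one complex double (CDB): 8 B real + 8 B imaginary.
BYTES_PER_COMPLEX_POINT = 8 + 8                   # = 16

n_points = 100_000
waveform_total = n_points * BYTES_PER_SINGLE_POINT_WAVEFORM
complex_total = n_points * BYTES_PER_COMPLEX_POINT

print(waveform_total, complex_total)   # 2800000 1600000
```

So for 100k points the waveform-per-point representation costs at least 2.8 MB versus 1.6 MB for a complex array, before any additional per-waveform handle overhead.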

 

Note that the "Build XY Graph" express VI (another thing I am not willing to use, though ;)) has an option to retain data between calls, basically turning it into a glorified xy chart. It even has "enable" and "reset" inputs. (Be very careful not to run out of memory; we cannot set the history size because nobody cares! :()
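The retain-data behavior can be modeled roughly like this. This is a hypothetical Python sketch, not the express VI's actual implementation; the `history_size` cap is precisely the feature the post says is missing:

```python
from collections import deque

class XYChartBuffer:
    """Hypothetical model of an XY 'chart': retains points between calls,
    with 'enable' and 'reset' inputs. Unlike the Build XY Graph express VI,
    it caps the history so memory cannot grow without bound."""

    def __init__(self, history_size=1024):
        self._points = deque(maxlen=history_size)  # oldest points fall off

    def update(self, x, y, enable=True, reset=False):
        if reset:
            self._points.clear()
        if enable:
            self._points.append((x, y))
        return list(self._points)   # data to wire to an XY graph

buf = XYChartBuffer(history_size=3)
buf.update(0, 1.0)
buf.update(1, 2.0)
buf.update(2, 3.0)
print(buf.update(3, 4.0))   # [(1, 2.0), (2, 3.0), (3, 4.0)]
```

With the deque's `maxlen` set, the fourth point pushes out the first, so the buffer can run forever without exhausting memory.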


LabVIEW Champion. It all comes together in GCentral
What does "Engineering Redefined" mean??
Message 11 of 27

@crossrulz wrote:

wiebe@CARYA wrote:

A restriction is X has to increase, or nothing will show?

Another restriction is all channels have to have the same number of waveforms added. 


Yep, all hell breaks loose when X goes backwards.

 

As far as the number of waveforms added, that only makes sense since you should only be writing to a chart from the terminal.  This means you should have an array of waveforms already; a waveform for each channel.  So I don't really consider this a restriction.


I meant XY Graphs don't have that restriction.

Message 12 of 27

@altenbach wrote:

@Bob_Schor wrote:
While this is certainly a clever idea, it is a "trick" that works for a peculiar reason.  Recall that the name of this Indicator is Waveform Chart.  A Waveform has three components:  t0, the starting X Value, dt, the constant spacing of all the unspecified, but computable X values, and Y, an array of all the Y Values.  This "trick" lets you specify X (as t0), ignore dt (since you have only a single X), and replace the Y array with a single-element Array holding the single value of Y.  I suppose you do get scrolling and other "Chart" attributes (I've never actually tried using this trick, though I applaud its cleverness).

 


Obviously, it "works", but probably just because some clever programmers added a lot of duct tape behind the scenes. 😉 While I always knew about it, it's not something I would ever be willing to use.

 

A waveform is defined by x0, dx, and an array of [Y], and here each point is its own waveform. So if the graph contains multiple waveforms, why are they all contracted into a single plot and optionally connected by lines? Even the data structures are wasteful, because each point requires at least 28 bytes (8 bytes for x0, 8 bytes for dx, 8 bytes for Y, and 4 bytes defining the size of each Y array, plus more). In comparison, if you maintain the xy graph in a complex array, we only need 16 bytes per point (or less if SGL is an option).

 

Note that the "Build XY Graph" express VI (another thing I am not willing to use, though ;)) has an option to retain data between calls, basically turning it into a glorified xy chart. It even has "enable" and "reset" inputs. (Be very careful not to run out of memory; we cannot set the history size because nobody cares! :()


I was worried about this too. Not sure if I am interpreting correctly, but after I make a copy of the waveform chart, change it to a control, and try to extract the data, I only get the last value I added; all the other single-point waveforms appear inaccessible. I'll need to run this for some time to see whether memory increases or not.

 

mcduff

Message 13 of 27

@mcduff wrote:
I only get the last value that I added, all the other single point waveforms appear inaccessible. Will need to run this for some time to see if memory increases or not.

What do you get reading the "history data" property node?


Message 14 of 27

@altenbach wrote:

@mcduff wrote:
I only get the last value that I added, all the other single point waveforms appear inaccessible. Will need to run this for some time to see if memory increases or not.

What do you get reading the "history data" property node?


That's where the data is! Thanks.

 

That being said, a thousand to ten thousand history points, although inefficient, probably won't make too much of a difference on a modern computer.

 

mcduff

Message 15 of 27

Hi mcduff,

 


@mcduff wrote:

That being said, a thousand to ten thousand history points, although inefficient, probably won't make too much of a difference on a modern computer.


Well, the memory requirement probably isn't a problem, but LabVIEW charts/graphs are rather slow when it comes to painting a lot of points in the plots!

Ever tried to plot a large dataset (i.e. a lot of points) containing noise on a graph set to autoscale? (Especially when antialiasing is enabled or thicker lines are used?)

Best regards,
GerdW

using LV2011SP1 + LV2017 (+LV2020 sometimes) on Win10+cRIO
Message 16 of 27

@GerdW wrote:

Hi mcduff,

 


@mcduff wrote:

That being said, a thousand to ten thousand history points, although inefficient, probably won't make too much of a difference on a modern computer.


Well, the memory requirement probably isn't a problem, but LabVIEW charts/graphs are rather slow when it comes to painting a lot of points in the plots!

Ever tried to plot a large dataset (i.e. a lot of points) containing noise on a graph set to autoscale? (Especially when antialiasing is enabled or thicker lines are used?)


Yes, that is why I always do a min/max decimation of my data before displaying it. See https://forums.ni.com/t5/LabVIEW/Re-Rube-Goldberg-Code/m-p/3771315/highlight/true#M1062724 for an example.

 

But 10k points or less really doesn't matter too much; it only matters at 100k to millions of points.

 

mcduff

Message 17 of 27

Hi mcduff,

 


@mcduff wrote:

But 10k points or less really doesn't matter too much; it only matters at 100k to millions of points.


I had problems with 9600 points/plot with noisy data - then I also included a decimation step…

Best regards,
GerdW

Message 18 of 27

@GerdW wrote:

Hi mcduff,

 


@mcduff wrote:

But 10k points or less really doesn't matter too much; it only matters at 100k to millions of points.


I had problems with 9600 points/plot with noisy data - then I also included a decimation step…


I found the sweet spot around 70 MiB of plot data. More than that and it gets sluggish. I haven't tried recently, but 70 MiB was consistent over a long period.

 

I usually use min/max decimation, but I turn each min/max block into 4 points: the first value, the min, the max, and the last value. When the data has large gaps, this ensures that the line between the parts is right. Not sure if that's standard. Plain min/max works OK, but isn't 100% accurate.
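The four-point variant described above can be sketched as follows. This is a minimal Python sketch; the block size and function name are illustrative, not a standard API:

```python
def decimate_minmax4(ys, block):
    """Reduce ys to 4 points per block: first, min, max, last.
    Keeping the first and last values preserves the connecting line
    between blocks when the data has large gaps; plain min/max does not."""
    out = []
    for i in range(0, len(ys), block):
        chunk = ys[i:i + block]
        out.extend([chunk[0], min(chunk), max(chunk), chunk[-1]])
    return out

print(decimate_minmax4([2, 9, 1, 5, 8, 0, 7, 3], block=4))
# [2, 1, 9, 5, 8, 0, 8, 3]
```

Each block of 4 input samples maps to (first, min, max, last), so the peaks survive decimation while the endpoints keep the plot's line segments anchored correctly across blocks.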

 

I still don't get why we have to do the decimation. If it were built in, nobody would have to. I don't see a downside.

 

It wouldn't be so terrible if there wasn't so much to consider. Like data updates, scale updates, user interaction, etc. Not to mention that the user can change to line styles that really don't make any sense with decimated data (like points). This again escalates a simple problem into lots and lots of work. We can't provide the default plot legend, we have to make our own... In fairness, I usually do that anyway, because I don't want my users to set interpolation and other nonsense.

Message 19 of 27

It is actually built in, see https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019YLKSA2&l=en-US

 

The problem then becomes memory; an indicator holds at least 2 copies of the data, so for large datasets it becomes prohibitive.

 

mcduff

Message 20 of 27