
Waveform Graph X-Axis - trying to show 5min, 10min, or all data

Hello,

 

I've attached a super simplified version of my producer/consumer DAQmx logging code. I'm using an event structure & property nodes to change the x-axis to suit my needs.

 

My goal is to alter the x-axis to show any of the following:

  • 5 minutes of live data (with continuous update of data)
  • 1hr of live data (with continuous update of data)
  • All live data (essentially every bit of data I've recorded + continuous update of data)
  • Autoscale

 

So far, the only part of this I'm able to get working is the autoscale. What do I need to feed the property node so that it shows the correct live time window and keeps updating?

 

Thanks!

Message 1 of 8

When you build the waveform graph, you need to specify dt = 1/44100; then you can set the graph range in seconds (300 = 5 min).

Is there a reason why you inverted the graph axis (XScale.Maximum = 0, XScale.Minimum = 13 M)?

One more thing... 

5 minutes of doubles at 44.1 kHz is about 13 M points; at 8 bytes per double that is roughly 100 MB, or well over 1 GB for 1 hour of data. LabVIEW (and everything else) will die trying to show that many points - the interface will become very unresponsive - and on screen you can only resolve about 2,000 points anyway.

If you need all the data, stream it to disk immediately and decimate (by a factor of ~10,000) what you are going to show.
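
To make that concrete, here is a rough Python sketch of the arithmetic involved (the function and all its names are mine for illustration, not a LabVIEW API; in LabVIEW you would wire the equivalent values into the XScale.Minimum / XScale.Maximum properties and decimate the array before plotting):

    def window_and_decimate(t_latest, samples, dt, window_s, max_points=2000):
        # Return the x-scale limits for the newest window_s seconds,
        # plus a decimated copy of those samples for plotting.
        x_max = t_latest
        x_min = max(0.0, t_latest - window_s)      # e.g. 300 s for "5 min"
        n = int(window_s / dt)                     # samples in the window
        recent = samples[-n:]
        step = max(1, len(recent) // max_points)   # keep ~max_points on screen
        return x_min, x_max, recent[::step]

For the "all data" case, pass window_s = t_latest so x_min stays at 0 and the whole (decimated) record is shown.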

Message 2 of 8

Ah, that was my mistake... the sample rate should actually be 1000 samples/s, which should make LabVIEW significantly more responsive when looking back at data. I updated the VI with the correct sample rate & un-inverted the graph, making max = 300 & min = 0. However, it still doesn't seem to work.

 

Just to reiterate, my goal is that when I click the 5 min button, I see a continuously updating graph showing the newest 5 minutes of data; the same goes for the 1 hr and all-data buttons. Also, note that this is a simplified version of my normal code, which is a producer/consumer loop that saves data to TDMS.

 

What are your thoughts?

Message 3 of 8

Sadly, I don't remember where I read this, but there was a Blog post that I think was referenced in the Forum a few years back that discussed doing exactly this.

 

Graphs plot "static" quantities and show a finite number of points.  Let's say you are sampling at 1 kHz, want to display 1000 points (because you have a 1920 x 1080 monitor, so you'll use 1000 of the 1920 horizontal pixels as "display points"), and want to show the last 5 minutes of data.  5 min * 60 sec/min * 1000 points/sec = 300,000 points, so you need to either display every 300th point or create a 1000-point "average" by averaging every 300 points into a single point.

What if you want an hour?  Well, average 3,600 points together (60 min * 60 sec/min * 1000 points/sec = 3,600,000 points spread over 1000 display points).

 

So here is where the "magic" comes in.  Build two 1000-point fixed-length Queues for the displays, and two "summing" Shift Registers.  As data come in, add them to the summing Shift Registers.  When the "5-minute" register gets 300 points, divide by 300, put the result on the 5-Minute Queue, and clear the sum.  Do the same with the "1-hour" register when it hits 3,600 points.

 

Let's say you want to see the last 5 minutes.  Start by graphing the contents of the 5-Minute Queue (you can use Get Queue Status to read the Queue's contents).  Every time you add a new element using Lossy Enqueue Element (so you automatically keep only the last 1000 points), replot your Graph.  It will appear to scroll, updating every 0.3 seconds.  Now you switch scales and want to see the last hour.  Simply start using the Hour Queue (which updates more slowly, every 3.6 seconds) and do the same thing.
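
If it helps to see the bookkeeping in text form, here is a minimal Python sketch of the same scheme (the class and its names are illustrative only; in LabVIEW the deque corresponds to a fixed-size lossy Queue and the running sum to a Shift Register):

    from collections import deque

    class AveragingDisplay:
        # Average every 'factor' samples into one point and keep only the
        # newest 'length' averaged points (maxlen makes the queue lossy).
        def __init__(self, factor, length=1000):
            self.factor = factor              # 300 for 5 min, 3600 for 1 hr at 1 kHz
            self.points = deque(maxlen=length)
            self.total = 0.0
            self.count = 0

        def add_sample(self, x):
            self.total += x
            self.count += 1
            if self.count == self.factor:     # the "summing register" is full
                self.points.append(self.total / self.factor)
                self.total = 0.0
                self.count = 0

    five_min = AveragingDisplay(factor=300)
    one_hour = AveragingDisplay(factor=3600)
    # Feed every acquired sample to both objects; plot list(five_min.points)
    # or list(one_hour.points) depending on which time scale is selected.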

 

By all means, use Design Patterns to keep yourself sane.  Something like the QMH, or at least a State Machine, would probably be helpful.

 

Bob Schor

Message 4 of 8

Hi Bob,

 

I've always wondered whether this is essentially what NI does with graphs to compress lots of data into a finite number of pixels, or whether they have a somewhat smarter algorithm.  I've never tried to test my theory.  Maybe I'll test it right now.

 

The smarter idea I'm thinking of is this: suppose that among the 3,600 points being averaged together into one data point (one pixel), a very large percentage of the data is consistent, but a few points are extremely large or small - noteworthy even on a "zoomed out" view - and those would disappear if all you did was average everything down to one representative pixel.  Perhaps the high, low, and average are all determined and plotted at a single X value.
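
A quick Python sketch of that idea (purely illustrative; I don't know what NI actually does internally):

    def minmax_decimate(samples, bucket):
        # Reduce each bucket of samples to (min, mean, max) so short spikes
        # survive the compression instead of vanishing into an average.
        out = []
        for i in range(0, len(samples) - bucket + 1, bucket):
            chunk = samples[i:i + bucket]
            out.append((min(chunk), sum(chunk) / len(chunk), max(chunk)))
        return out

    # Plotting all three values at each X position preserves outliers
    # that a plain per-bucket average would smooth away.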

 

EDIT:

Okay, now I'm sure NI does something like I was describing.  Here is the difference between a relatively small graph receiving 10,000 data points, and one generated by averaging every 100 points into a 100-point graph.

Message 5 of 8

Bob, that sounds like a serious headache.

What do people normally do when simultaneously logging & looking at data? Would it make more sense to use a waveform chart instead of a graph?

 

Any other tips for looking at the same data over different periods of time?

Message 6 of 8

@cpip wrote:

Bob, that sounds like a serious headache.

 


As I recall (I did this as a demo a few years ago), it did require some facility with Queues and Producer/Consumer Design Patterns, but I was surprised at how efficient and "intuitive to use" it was.  I'm thinking of resurrecting (= re-developing) this for a project where I'm monitoring 24 subjects and want to see data (for a selected subject) collected at 10 Hz for just over 2 hours, at time scales from "the past minute" to "since the beginning".  

 

Bob Schor

Message 7 of 8

Hey, RavensFan.  Thanks for the note (sorry I didn't respond earlier).  We have an application where several of the channels are bio-potentials, including ENG (electro-nystagmograms, basically DC recordings of the potential across the retina that can be used to record the subject's horizontal and vertical eye movements).  We were sampling (and saving) all the A/D channels at 1 kHz, but for on-line monitoring were decimating the data by a factor of 50 to display "what's happening" at a 20 Hz update rate (we plotted 600 points, so we saw the last 30 seconds of data scrolling by).

 

I wrote a new version of this program and, for the plot, averaged every 50 points to get my 20 Hz plot updates.  The students doing the study complained, as they used the "noisiness" of the ENG signal to tell whether the electrodes had "come unstuck" from the skin and become excessively noisy (I was essentially low-pass filtering the signal).  I replaced the average with "display the first of every 50 points" and they were much happier.
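
In text form, the two decimation strategies differ by only a line or two (Python, illustrative only):

    def decimate_average(samples, factor=50):
        # Averaging acts as a low-pass filter and hides the noise.
        return [sum(samples[i:i + factor]) / factor
                for i in range(0, len(samples) - factor + 1, factor)]

    def decimate_first(samples, factor=50):
        # Taking one raw sample per block preserves the visible noisiness.
        return samples[::factor]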

 

The "Take Home Lesson" is that if you are doing the processing (averaging, decimating, taking medians, other), you can choose what you do to make the graph show the data in the most meaningful way.

 

Bob Schor

Message 8 of 8