LabVIEW


Continuous DAQ stream implementation


Hi everyone,

 

I'm after some advice from more experienced LabVIEW users around here for a VI that I'm trying to implement.

 

The main priority for this VI is to stream data (voltage) as fast as practically possible. For the DAQ module that I'm using (PXI-6289), this is capped at a little over 66 kHz. Other than making sure that the consumer loop saves to file (TDMS) fast enough to avoid overflowing the queue, there's some small data manipulation involved: some simple arithmetic to get the true voltage value based on the amplifier gain setting. Beyond that, I'd like to plot the data stream on an XY graph.

 

Some specific questions that I have are:

 

How often should the VI be writing to the TDMS file so that it doesn't slow things down too much?

 

When is it appropriate to add the functionality to plot the data and do the math while minimizing delays? Lag behind 'real time' is absolutely OK for me, since we are just plotting data from a queue and that's understandable.

 

Similar to the last question, where should other functionality be placed? Say I'd like to run another VI that turns on another piece of equipment and polls user input, such as an on/off button.

 

Any suggestions/advice will be greatly appreciated!

 

Thanks in advance!

Message 1 of 13

Post your code so we can give more focused advice on what you are doing. I would make sure that you have the save code in a different loop. How often you save will depend on your system; the hard drive specs and CPU will drive this. If you are not keeping up, you may need two loops to save the data. At some point, hard drive space and file size will become the issue, depending on what OS you are using.
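The producer/consumer split described here can be sketched outside of LabVIEW. Below is a minimal Python analogue using a thread-safe queue: one loop stands in for the DAQmx Read (producer), the other for the TDMS write (consumer). The chunk size and sample rate are illustrative numbers taken from this thread, not from the poster's actual VI.

```python
import queue
import threading

SAMPLE_RATE = 66_000        # samples/s, per the PXI-6289 figure in the thread
CHUNK = SAMPLE_RATE // 10   # read ~1/10 s of samples at a time

def producer(q, n_chunks):
    """Stand-in for the DAQmx Read loop: pushes chunks onto the queue."""
    for _ in range(n_chunks):
        samples = [0.0] * CHUNK   # placeholder for one DAQmx Read's worth of data
        q.put(samples)
    q.put(None)                   # sentinel: acquisition finished

def consumer(q, sink):
    """Stand-in for the TDMS-writing loop: drains the queue to 'disk'."""
    while True:
        chunk = q.get()
        if chunk is None:
            break
        sink.extend(chunk)        # replace with a TDMS/file write in real code

q = queue.Queue()
written = []
t = threading.Thread(target=consumer, args=(q, written))
t.start()
producer(q, n_chunks=5)
t.join()
print(len(written))               # 5 chunks * 6600 samples = 33000
```

The point of the sketch is the decoupling: the producer never waits on the disk, and the queue depth tells you whether the consumer is keeping up.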

Tim
GHSP
Message 2 of 13

66 kHz sampling shouldn't present any major difficulties for streaming to disk.  A typical "rule-of-thumb" starting point is to read about 1/10 sec worth of samples at a time, and I suspect that rule should work fine for your situation.

 

The thing that might prove trickier to manage is your X-Y graph, depending on whether you're trying to trace the *full* history or not.  You will probably need to do some reasonable data reduction for the graph.  Keep in mind that any onscreen graph will have something on the order of 1000 pixels in each dimension, so you could never see all the detail from, say, 10 seconds worth of *unreduced* 66 kHz data anyway.

 

As to TDMS, there are some properties you can set to optimize the disk cache size, though I suspect it won't be necessary at your 66 kHz sample rate.

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 3 of 13

Following up on Kevin Price's remark, think about looking at video -- 60 frames/second is not bad, and if you are looking at a changing chart/graph, you won't be able to see detail much faster than this.  So (doing some calculations on the back of this envelope I have lying around), 66 kHz sampling displayed at 60 FPS means that you display one point for every ~1000 that come in.  Depending on your situation, you may want to average every 1000 points (pretty easy to do), or decimate, keeping every 1000th point.  You might say "Why, average, of course!", but when I did this for code I wrote for some of my students, they complained: it was the presence of noise in the signal that alerted them to the possibility that the electrodes on the subject had come loose.  Averaging/decimating is really fast, while updating a display is several orders of magnitude slower, so this is a good trade ...
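The average-vs-decimate choice above in plain code (a pure-Python sketch; the reduction factor of 1000 matches the back-of-envelope 66 kHz / 60 FPS arithmetic only approximately):

```python
def decimate(data, factor):
    """Keep every factor-th sample; preserves the noise spikes the students wanted to see."""
    return data[::factor]

def block_average(data, factor):
    """Mean of each factor-sized block; smooths the noise away."""
    return [sum(data[i:i + factor]) / factor
            for i in range(0, len(data) - factor + 1, factor)]

samples = list(range(10_000))         # stand-in for a stretch of acquired data
print(decimate(samples, 1000))        # [0, 1000, 2000, ..., 9000]
print(block_average(samples, 1000))   # [499.5, 1499.5, ..., 9499.5]
```

Either reduction turns ~10,000 points into 10 before they ever reach the graph, which is where the display-speed savings come from.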

 

Bob Schor

Message 4 of 13

Edit:

Apologies for not attaching any files, I was still in the process of setting up something crude.

 

I've attached the draft VI here. Sub VIs are not included but should be enough to give an idea of the implementation.

Message 5 of 13

Hi Bob,

 

Thanks for the suggestions, much appreciated and useful. I really like the idea of decimating the array since it's only for visualization so I've implemented that.

 

I'm puzzled by the data that's coming out, though. I don't think I understand the DAQ's number of samples and sampling rate. I mean, regardless of the number of samples set, you should get out samples at the sampling rate, right? Assuming no delays between each sample cycle. So if something is sampling at 100 Hz, I'm expecting 100 data points every second.

 

Here, the higher the number of samples I put, the less data I get out each second. Am I missing something here?

 

Many thanks,
Nick.

Message 6 of 13

FYI if you're worried about TDMS performance you can configure DAQmx to stream it to file as it's written, not in your data consumer loop. If you want to scale it first, use a DAQmx custom scale (created programmatically, not in MAX) first and AFAIK it'll stream the scaled values to file automagically.
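For anyone doing this from text-based code, the same DAQmx features (hardware-assisted TDMS logging and custom scales) are also exposed in NI's nidaqmx Python package. A configuration sketch only: the channel name, gain, buffer size, and file path are placeholders, and this needs the actual hardware and driver to run.

```python
import nidaqmx
from nidaqmx.constants import (AcquisitionType, LoggingMode,
                               LoggingOperation, UnitsPreScaled, VoltageUnits)

# Linear custom scale: true_voltage = slope * measured + intercept.
# slope = 1/gain is a placeholder for the amplifier correction in the thread.
gain = 10.0
nidaqmx.scale.Scale.create_lin_scale(
    "amp_gain_scale", slope=1.0 / gain, y_intercept=0.0,
    pre_scaled_units=UnitsPreScaled.VOLTS)

with nidaqmx.Task() as task:
    task.ai_channels.add_ai_voltage_chan(
        "PXI1Slot2/ai0",                      # placeholder physical channel
        units=VoltageUnits.FROM_CUSTOM_SCALE,
        custom_scale_name="amp_gain_scale")
    task.timing.cfg_samp_clk_timing(
        66_000, sample_mode=AcquisitionType.CONTINUOUS,
        samps_per_chan=66_000)                # buffer holds ~1 s of data
    # LOG_AND_READ streams the scaled samples to TDMS as they are acquired,
    # while keeping them readable in the loop for plotting/math.
    task.in_stream.configure_logging(
        "stream.tdms", logging_mode=LoggingMode.LOG_AND_READ,
        operation=LoggingOperation.CREATE_OR_REPLACE)
    task.start()
    data = task.read(number_of_samples_per_channel=6_600)  # ~0.1 s per read
```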

Message 7 of 13

That's pretty useful! Implementing it like that will just send all measured values to logging, so the consumer loop doesn't need to write anything anymore, right? I suppose I'll still need it to view the data and perhaps do some math on it?

Message 8 of 13
Solution
Accepted by topic author NickUoA

@NickUoA wrote:

I'm puzzled by the data that's coming out though. I don't think I understand the DAQ's number of samples and sampling the rate. I mean regardless of the number of samples set, you should get out samples equal to the sampling rate right? Assuming no delays between each sample cycle. So if something is sampling at 100 Hz, I'm expecting 100 data points every second


I may mess this up (as I'm going from "experience", not necessarily looking at my code to see how I "really" did it).  But ...

There are three numbers of importance:

  • the Samples per Channel that you tell DAQmx (in the DAQmx Timing function) you are going to take, which defines the size of the DAQmx buffer;
  • the number of Samples per Channel you tell the DAQmx Read function to acquire on each Read;
  • the Sampling Rate, how many Samples per Second DAQmx really takes.

I'm assuming you are using Continuous Sampling, by the way ...

I just set up a little dummy DAQmx routine with Samples per Channel = 100, and a Sampling Rate of 100 Hz.

 

So I configure my DAQmx Read for 100 points, the size of the Buffer.  Every second, my plot updates by 100 points.  Makes perfect sense.

 

Now I run the same code, but tell DAQmx Read to read only 30 points.  How often do you expect I'll see my plot update?  Well, every 30 points at 100 Hz is 0.3 seconds, so my plot updates roughly 3 times faster, but otherwise shows the same data as the first example (just updates more often).

 

Finally, I ask DAQmx to read 200 points.  Guess what it does?  It reads 200 points, which at 100 Hz takes 2 seconds, so the plot updates every 2 seconds.  But the buffer only had room for 100 points, so I see (roughly) only the last 100 of those samples (as the buffer gets overwritten); not a good idea.

 

Of course, I have no idea how you set up your DAQmx code (the relevant bits weren't all included in your code segment).  The point is that you can't arbitrarily set the number of samples in the Read without thinking about where those data need to be saved during the sampling (the first parameter I mentioned, set in the DAQmx Timing function).
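The three scenarios above reduce to simple arithmetic: the update period is (samples per Read) / (sample rate), and a single Read must not exceed the buffer set in DAQmx Timing. A sketch, using the same 100-point buffer and 100 Hz rate as the examples:

```python
def update_period(read_size, rate_hz):
    """Seconds between plot updates when each Read waits for read_size samples."""
    return read_size / rate_hz

def read_is_safe(read_size, buffer_size):
    """A single Read larger than the buffer risks overwritten samples."""
    return read_size <= buffer_size

BUFFER = 100   # Samples per Channel given to DAQmx Timing
RATE = 100.0   # Hz

print(update_period(100, RATE))   # 1.0 s  -> plot updates once per second
print(update_period(30, RATE))    # 0.3 s  -> ~3x faster updates, same data
print(update_period(200, RATE))   # 2.0 s, but ...
print(read_is_safe(200, BUFFER))  # False: 200 > 100, buffer gets overwritten
```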

 

Bob Schor

Message 9 of 13

Hi,

 

Thank you once more for the explanation; it makes sense now. I've reached a point where I'm quite happy with the VI; it does what is required well. I ended up using the DAQmx logger to write straight to TDMS, taking this task away from the consumer loop. However, it feels quite limited: I'm struggling to make the kinds of changes you can when setting up the TDMS file normally rather than through DAQmx. It would be nice to add timestamps to the values. Not system time, though; rather, the relative time at which each measurement was taken. I suppose I could just generate that myself from 1/(sampling rate). Is that accurate enough?
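Deriving per-sample timestamps from the rate, as suggested above, works like this: with a hardware-timed sample clock the i-th sample occurs at i/rate seconds after the start, which is accurate to the device's timebase (typically far better than any software timestamp). A small sketch:

```python
def time_axis(n_samples, rate_hz, t0=0.0):
    """Relative timestamps for hardware-timed samples: t_i = t0 + i / rate."""
    dt = 1.0 / rate_hz
    return [t0 + i * dt for i in range(n_samples)]

# 66 kHz example from this thread: consecutive samples are ~15.15 us apart
t = time_axis(5, 66_000.0)
print(t[1] - t[0])   # ~1.515e-05 s
```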

 

I've attached the main VI and the VI that sets up the DAQ. Any further advice or suggestions to improve the current state will always be appreciated!

 

-Nick.

Message 10 of 13