LabVIEW


How do I ensure there is no data lost (clock latency) between iterations of a while loop that calls a DAQmx Read VI?

Solved!

 

In the block diagram below, I am acquiring 10k samples at 100k Sa/sec.  The loop does not iterate until the task is complete, and continues cycling unless there’s an error or the operator presses the stop button.

 

Hardware used: USB-6341

Software used: LabVIEW 2011 SP1

AI_loop_block_diagram_screen_shot_01.JPG

Message 1 of 8
Solution
Accepted by topic author Agile

Most of what you showed looks quite sound.  The main quibbles are a couple of things that might have been just for illustration / proof of concept.

 

Quibbles:

- building an unknown-size array of u32 "timestamps" in a while loop

- forking the AI data to more places than just the Enqueue function

- the destinations of those forked AI waveforms might be GUI indicators that could occasionally (?) slow down the loop

- you aren't specifying a # of samples to Read inside the loop.  It'll default to reading "all available", and with nothing to throttle the loop speed, that'll keep being a very small number.  You'll be better off specifying something like the 10k you specified in DAQmx Timing (which doesn't automatically carry over to be used for Reads).  You might get some efficiencies in your downstream consumer loop by producing data arrays of fixed size.
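That last point can be sketched numerically (plain Python, not NI's API; the numbers just mirror the 100 kSa/s, 10k-sample configuration from the original post):

```python
# Illustrative model only: compare chunk sizes when reading "all
# available" samples vs. requesting a fixed count per Read call.
RATE = 100_000          # Sa/s, as configured in DAQmx Timing
FIXED_N = 10_000        # samples per channel requested per Read

def chunks_all_available(loop_periods_s):
    # each iteration returns whatever accumulated since the last read,
    # so jittery loop timing produces ragged, often tiny, arrays
    return [round(RATE * dt) for dt in loop_periods_s]

def chunks_fixed(total_samples):
    # a fixed-count Read blocks until FIXED_N samples are available,
    # so every chunk (except possibly the last) is exactly FIXED_N
    full, rem = divmod(total_samples, FIXED_N)
    return [FIXED_N] * full + ([rem] if rem else [])

print(chunks_all_available([0.003, 0.011, 0.0042]))  # [300, 1100, 420]
print(chunks_fixed(35_000))  # [10000, 10000, 10000, 5000]
```

The fixed-size variant is what lets a downstream consumer loop preallocate and avoid reshaping work on every iteration.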

 

Solutions:

- move all that stuff to the loop containing the queue's consumer.  You could consider passing the cluster of "<relative timestamp, AI waveform>" through the queue and making sure the producer loop doesn't fork the AI waveform to any other destination.  Then the consumer could re-accumulate the array of times as needed.

 

Comments:

- DAQmx is probably making a bigger buffer than the 10k you requested.  The DAQmx Timing VI treats the "samples per channel" input as a mere request when doing Continuous Samples.  If you ask for too small a buffer, it'll override you and make a bigger buffer automatically.  IIRC, it's often somewhere in the 2-10 sec range.

- you might time out your first AI Read if the trigger doesn't come soon enough

 

-Kevin P

 

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 2 of 8

Why do you use "Is Task Done" at all? It serves no purpose.

 

And avoid unlimited-growth data structures, like the array "AI Times" and the queue.

 

@Kevin: The default buffer size is normally (with some exceptions) sized to hold between 100..1000 ms of acquisition time. Exceptions can go up to about 10 s.
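NI documents the default-input-buffer heuristic as a small lookup on sample rate; a sketch of it (thresholds taken from NI's buffer-sizing KB article, which may vary by driver version, so treat them as illustrative):

```python
# Sketch of DAQmx's documented default input-buffer sizing for
# continuous acquisition.  The driver uses the LARGER of the
# requested size and the rate-based default.
def daqmx_input_buffer(rate_hz, requested_samples):
    if rate_hz <= 100:
        default = 1_000          # 1 kS
    elif rate_hz <= 10_000:
        default = 10_000         # 10 kS
    elif rate_hz <= 1_000_000:
        default = 100_000        # 100 kS
    else:
        default = 1_000_000      # 1 MS
    return max(requested_samples, default)

# the 10k requested at 100 kSa/s gets overridden to a 100 kS
# buffer, i.e. one second of data at that rate
print(daqmx_input_buffer(100_000, 10_000))  # 100000
```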

 

Norbert

----------------------------------------------------------------------------------------------------
CEO: What exactly is stopping us from doing this?
Expert: Geometry
Marketing Manager: Just ignore it.
Message 3 of 8

Thanks Norbert for the correction on default buffer sizes.  For anyone's future reference, more details can be found here.

 

-Kevin P

 

Message 4 of 8

Thank you both, Kevin and Norbert, for a quick and thorough response and helpful criticism on writing (drawing) tighter code.

 

Perhaps I should clarify a couple of points that are distracting from the question, “How do I ensure there is no data lost (clock latency) between while loop iterations?” 

 

1.  building an unknown size array - The "Tick Count" VIs at the top of the loop were used to determine timing variations between loops. They were for diagnostic purposes only, and would be pulled out before the final version.  You made a good catch, though, on not defining the array size before the loop.  It's EVIL.

 

2.  forking the AI data to more places - again diagnostic only.  Sorry for the misrepresentation.

 

3.  you aren't specifying a # of samples to Read inside the loop - Ahh, there's the ticket, mate.  I'll have to try that.  As it is currently written, there is a loop-to-loop time variation, and with it, a variation in the array size of the data within the waveform.  The longer the loop duration, the more data there is in the waveform array. The data/duration relationship makes sense, and its ratio equals the "rate (samples/sec)" setting for the Sample Clock.  I did not realize, as you said, that the Sample Clock rate "doesn't automatically carry over to be used for Reads." I will tie them together and test.  BTW, how do you know this: just through experience, or did you read it?

 

4.  you might timeout your first AI Read if the trigger doesn't come soon enough - Do you mean, assign a value to the timeout input of the DAQmx Read?  If so, good point, although in this case, it's hardwired to a pulse train output of another line within the program and "should" always fire within ms of this one.

 

5.  Why do you use "Is Task Done" at all? It serves no purpose. - Because I don’t understand it well enough.  If you have a link to more info on it, I would appreciate it.

 

I’m going to specify the “number of samples to read” and “timeout” inputs on the DAQmx Read VI, and retest this afternoon.  I’m expecting the loop-to-loop duration and waveform data size to be consistent. 

 

BUT, this doesn’t answer the original question, “How do I ensure there is no data lost (clock latency) between while loop iterations?” 

 

Thanks again,

Darrell

Message 5 of 8

Re: original question - the DAQmx driver manages your iterative calls to DAQmx Read in such a way that it returns you a lossless stream of data.  No samples are lost, unless you iterate so slowly that you overrun your buffer, in which case DAQmx Read will return an error.  Hence the attention being paid to making your loop as efficient as possible -- if you take care of that, the losslessness is handled by the DAQmx driver.
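That contract can be shown with a toy model (plain Python, invented numbers; the error code is the one DAQmx conventionally reports for a buffer overwrite, error -200279): the device deposits samples into a fixed-size buffer each "tick", and the application's Read drains it.  Falling behind eventually overruns the buffer, producing an error rather than silent sample loss.

```python
# Toy model of DAQmx's lossless buffered acquisition.  The driver
# never drops samples quietly: either the reader keeps up, or the
# backlog exceeds the buffer and an error is raised.
def stream(buffer_size, produced_per_tick, read_per_tick, ticks):
    backlog, delivered = 0, 0
    for _ in range(ticks):
        backlog += produced_per_tick            # hardware fills buffer
        if backlog > buffer_size:
            raise OverflowError("buffer overrun: samples lost")
        take = min(read_per_tick, backlog)      # application reads
        backlog -= take
        delivered += take
    return delivered

# reader keeps pace with the producer: all 500000 samples delivered
print(stream(100_000, 10_000, 10_000, 50))  # 500000
# stream(100_000, 10_000, 5_000, 50) raises OverflowError instead
```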

 

I kinda skipped over a direct answer to the question because the rest of your code led me to assume you'd already know that.   And that's meant as a compliment, BTW.  

 

A couple more specific comments:

 

3. Just to be clear -- the sample clock RATE is not affected by the Read calls.  I was referring to the "# samples" input to DAQmx Timing which is used to "suggest" a buffer size.  This is entirely separate from the "# samples to read" input to DAQmx Read.  When left unwired, the default value is -1, a magic number which is interpreted as "read all available samples".  If you request, say, 10000 samples, the Read call will block until 10000 are available and then return them all at once.  While waiting, it doesn't burn a lot of CPU either.  Note that there's also a timeout input to the Read function.  Left unwired, I believe the default is 10 sec.

   At this point, I've both read and experienced this stuff so many times I'd be hard pressed to say which came first.  These things definitely are documented, but it isn't always all in the same place.

 

4. Fair enough on the triggering if you control the trigger signal.  For future reference when you can't predict the trigger time, I'd recommend adding some trigger detection code prior to the main loop.  You could just add a DAQmx Read with a super-long timeout value there, or maybe a loop that keeps trying to Read, and terminates on success while ignoring timeout errors.
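The retry-on-timeout pattern described above has this general shape (plain Python with a stand-in `read_fn`; in the real VI, this would wrap a DAQmx Read call with a short timeout):

```python
# Pre-loop trigger wait: keep retrying a short-timeout read,
# ignore timeouts, and stop on the first successful read.
def wait_for_trigger(read_fn, max_attempts=100):
    for _ in range(max_attempts):
        try:
            return read_fn()      # first successful read = triggered
        except TimeoutError:
            continue              # a timeout just means "no trigger yet"
    raise RuntimeError("trigger never arrived")

# demo with a fake read that times out twice, then returns data
attempts = iter([TimeoutError, TimeoutError, [1.0, 2.0]])
def fake_read():
    r = next(attempts)
    if r is TimeoutError:
        raise TimeoutError
    return r

print(wait_for_trigger(fake_read))  # [1.0, 2.0]
```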

 

5. "Is Task Done?" is mainly useful for finite acquisitions.  A continuous acquisition will never be done except when there's been a task error.  Your Read call would already have returned that error, so no need to check again.

 

 

-Kevin P

 

Message 6 of 8

@Kevin Price wrote:
[..]5. "Is Task Done?" is mainly useful for finite acquisitions.  A continuous acquisition will never be done except when there's been a task error.  Your Read call would already have returned that error, so no need to check again.

 


To add to Kevin's answer, I want to point out that finite acquisitions should never be so long that you require this feature.

The main use case for "Is Task Done?" is continuous generation when configured for regenerative generation. Regenerative generation means that you write the data once into the output buffer and it is repeated over and over again; you never "refill" the buffer. So since you still have a loop (to keep the program running) but are not writing any more data, you most probably want to know if there are any errors in your generation. That's where you'd use "Is Task Done?".

Kevin already pointed out that the task will never be done, but the call will return error information if there is any.

 

Norbert 

Message 7 of 8

Thanks again for the input, and kudos to you both. Here is a screenshot of the final version.  It is actually integrated into a larger VI, but I simplified it for the sake of discussion.  The "No. Samples/ch" input is supplied by a VI that creates a pulse train.  The pulse train length in milliseconds is multiplied by the number of samples per ms (e.g. a pulse train length of 70 ms x 100 samples/ms).  This way, the waveform generated per loop iteration contains one pulse train with the analog data concurrent with it.

AI_loop_block_diagram_screen_shot_02.JPG
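The sizing arithmetic above, spelled out (values from the post; 100 kSa/s gives 100 samples per millisecond):

```python
# Samples-per-read sizing: one Read per pulse train, sized so the
# returned waveform spans exactly one pulse train's worth of data.
rate_hz = 100_000                     # 100 kSa/s sample clock
samples_per_ms = rate_hz // 1000      # 100 samples every millisecond
pulse_train_ms = 70                   # pulse train length from the post

n_samples = pulse_train_ms * samples_per_ms
print(n_samples)  # 7000 samples per loop iteration (70 ms of data)
```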

Message 8 of 8