Getting Exact Loop Timing for Data Measurements

Using the new timing idea worked well enough, but it really exposed how poorly Windows manages time. Here's some data:

 

Loop timing results (all times in seconds):

Rate (S/s)   Avg dt     StdDev     Min dt     Max dt
10           0.100113   0.003492   0.09045    0.112028
50           0.020041   0.004317   0.009963   0.025488
100          0.010021   0.003268   0.000025   0.017235
1000         0.001002   0.003116   0.000017   0.020219

 

As you can see, things really go off the rails at higher sample rates. The loop itself is not explicitly timed; it just iterates as fast as the DAQ system returns a reading. I believe what's happening is that if the loop runs long in one cycle, it runs short in the next: at least at the lower rates, the Min and Max dt values sum to roughly two average cycle times, plus some jitter from the OS.
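For anyone who wants to reproduce this kind of measurement outside LabVIEW, here is a minimal Python sketch of the same idea: time a loop with a high-resolution monotonic clock (the analog of High Resolution Relative Seconds) and report the dt statistics. The `time.sleep` call is just a stand-in for the blocking DAQ read.

```python
# Sketch: measure loop-period jitter with a high-resolution clock,
# analogous to wrapping a loop with High Resolution Relative Seconds.
import statistics
import time

def measure_jitter(period_s, n_iterations=200):
    """Run a sleep-paced loop; return (avg, stdev, min, max) of measured dt."""
    dts = []
    last = time.perf_counter()
    for _ in range(n_iterations):
        time.sleep(period_s)          # stand-in for the blocking DAQ read
        now = time.perf_counter()
        dts.append(now - last)
        last = now
    return (statistics.mean(dts), statistics.stdev(dts), min(dts), max(dts))
```

On Windows you should see the same pattern as the tables above: the average tracks the requested period closely, while the min/max spread reflects the OS scheduler's granularity.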

 

So what I've actually done now is decide to see how hard it would be to move to the Producer/Consumer world. It doesn't seem to be that hard, actually, so I'm cautiously heading in that direction. If I can get it implemented before the end of the year, it doesn't hurt me. I have a few questions about it I'd like to ask here, if that doesn't drag this thread too far off the original topic.

 

  1. In the white paper (http://www.ni.com/white-paper/3023/en/), they show shift registers holding the queue reference, but also just a tunnel coming into the loop.  Which is the right way to do it?  From the little testing I've done, it doesn't seem to matter.
  2. What is considered proper and/or better for performance?  I would like to pass three things from the producer loop to the consumer loop: the current data measurements (DBL array), the current elapsed times (DBL array), and whether the test is complete (BOOL).  Should I make three separate single-element queues, or bundle them into a cluster and let the consumer loop unbundle them?  The consumer loop is set to run at 10 Hz per GerdW's suggestion, and because the part left in the consumer loop (running the test stand relays) really shouldn't need to run any faster.
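Since LabVIEW is graphical, question 2 can only be sketched by analogy; here is a Python version where the "cluster" is a dataclass carried on one queue, so all three values arrive at the consumer together and stay mutually consistent. The `Sample` name and fields are illustrative, not from the original VI.

```python
# Sketch of question 2: one queue carrying a "cluster" (a dataclass here)
# instead of three separate queues. One queue keeps the values atomic.
from dataclasses import dataclass
from queue import Queue

@dataclass
class Sample:               # analog of a type-def'd LabVIEW cluster
    data: list              # current data measurements (DBL array)
    elapsed: list           # current elapsed times (DBL array)
    done: bool              # whether the test is complete

q = Queue(maxsize=1)        # single-element queue, as in the question

# producer side: enqueue one bundled sample
q.put(Sample(data=[1.0, 2.0], elapsed=[0.0, 0.1], done=False))

# consumer side: all three values arrive together ("unbundled" on receipt)
s = q.get()
```

With three separate queues, the consumer could in principle read a data array from one iteration paired with an elapsed time from another; bundling avoids that race.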

I know I said I was going to post a shot of where the program was at, but this seems like such a better idea that I'm going to take a little extra time to get this going first.

Message 11 of 19

Windows is a multi-tasking OS (as are other OSs like Linux and iOS), not a priority-scheduled real-time OS. It is successful because it prioritizes user interactivity and multiple, dynamically launched processes. That makes it good for general computer use and for systems where user interaction is key (like your development environment, for example). However, it also makes Windows terrible at guaranteeing CPU time; if memory serves, each time slice is around 10 ms, assuming an application doesn't yield before then. If you built your example into an application, you'd also discover that the jitter subsides somewhat, since some of it comes from running the code in the development environment.

 

Figure 1 in that white paper is adequate for what you are trying to achieve, using a type-def'd cluster (so that you can change it later if needed). The shift registers don't actually help here, since the queue wire is a reference and it doesn't change between loop iterations.

 

You are still going to get some jitter between each cycle of your multiple DAQ reads because, as a multi-tasking OS, Windows is free to schedule CPU time to any other task running in the background. You can disable debugging and increase the VI priority, so that it ends up on an execution system queue under a higher-priority OS thread, to improve this further. If jitter between the multiple reads is still a problem for you, then you might consider an RTOS.

Message 12 of 19

Reviewing the thread, it still isn't clear to me that you're solving the right problem.  Some of this has been said before, but I'll try to make it concise:

 

  • If the timing of data samples matters, rely on the DAQ sample clock.
  • If loop iteration timing needs to be measured, use High Resolution Relative Seconds.
  • If loop iteration timing needs to be *consistent*, don't count on Windows.

 

 

Please give more detailed background.  You mentioned early on that you're trying to change the overall timing scheme.  What was the old scheme and what was wrong with it?  What do you plan to do instead?  What improvements should your new scheme accomplish?

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 13 of 19

I like your summary, Kevin; simple and direct 😉

Message 14 of 19

The original timing scheme used the Elapsed Time Express VI. That worked OK, but it's really slow. After that, I went to using the elapsed time reported by the cDAQ clock. That was better in terms of speed, but since the loop does other things besides log data, it wasn't iterating at exactly the same rate the data was being collected. That led me to try the High Resolution Relative Seconds timer, which showed that while the average loop time was very close to the data sampling interval, the uncertainty was unacceptably large (see my previous post for data). So now I'm going to use the Producer/Consumer idea, put the data measurement in its own loop, and see how that works. There should be no need for the High Resolution Relative Seconds timer in that design. I'll have a working test in a day or so and will share it here.

Message 15 of 19

Several of us have suggested that you rely on the DAQ device, which has a very accurate hardware clock, to control the sampling time. Combined with a Producer/Consumer structure that "saves the free time" for the Consumer loop, you get the best of both worlds: accurate timing (from the DAQ's sample clock) and plenty of time for data processing (since the Producer loop's DAQ Read blocks until the data have been acquired, then dumps them to the Consumer loop and goes back to blocking on the next Read).
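The blocking-read pacing described above can be sketched in Python (as an analogy only; the real VI would use DAQmx). Here `read_daq` is a hypothetical stand-in for a hardware-timed DAQmx Read: the producer blocks on it, hands the data off, and immediately blocks again, so the "hardware" sets the pace and the consumer gets all the leftover time.

```python
# Sketch of the Producer/Consumer pacing: the producer blocks on the
# (simulated) DAQ read, so the device clock, not software, sets the loop rate.
import threading
import time
from queue import Queue

def read_daq(n_samples=10, dt=0.001):
    """Hypothetical stand-in for a DAQmx Read: blocks until samples exist."""
    time.sleep(n_samples * dt)
    return [0.0] * n_samples

def producer(q, n_blocks=5):
    for _ in range(n_blocks):
        q.put(read_daq())             # pace is set by the blocking read
    q.put(None)                       # sentinel: acquisition finished

def consumer(q, results):
    while (block := q.get()) is not None:
        results.append(len(block))    # stand-in for processing/logging

q, results = Queue(), []
t = threading.Thread(target=producer, args=(q,))
t.start()
consumer(q, results)
t.join()
```

The sentinel (`None`) is one common way to tell the consumer that acquisition is done, which maps onto the "test complete" BOOL discussed earlier in the thread.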

 

I don't recall seeing you post a VI that shows the DAQ code, so beyond saying "you may be doing the timing wrong," we can't see what you are doing and can't suggest code to do it better (other than the words already said here). So please attach the sub-VI that includes the DAQ code (no pictures, please) ...

 

Bob Schor

Message 16 of 19

I agree with Bob, please post the loop/DAQ code.  And describe the bigger picture about timing.  You've been super focused on loop iteration timing, but haven't given background about why you think it matters so much.  I'm not saying it definitely doesn't, but I haven't been convinced that it definitely does.  I've done a lot of DAQ apps and can almost always structure them to be pretty insensitive to loop timing.

 

The key question to me is: are you using the measured value(s) to drive a control algorithm?  If so,

A. Windows isn't ideal, but you may be stuck with it

B. Software timed DAQ may be the lesser of evils due to *latency* concerns for hw-timed & buffered DAQ.  This is always a concern for output signals, sometimes a concern for input.

C. Faster loop iterations may not be particularly important.  Once you're fast enough to maintain a sufficient degree of control over your device or process, going faster won't change things very much.  You won't need to iterate an order of magnitude faster than your system's time constant.

 

 

-Kevin P

Message 17 of 19

Sorry for taking so long to respond. The big picture is that the program needs to run a valve test stand. To do so, it needs to gather data at a potentially high rate, say 100 kHz, while also updating a UI and triggering a series of relays at certain intervals. Those relays control valves on the test stand.

 

I haven't attached a VI because it relies on too many sub-VIs, which are proprietary and beside the point anyway. I use this particular VI just to prototype the relay timing, so it isn't a complete test stand control loop, but I think it's enough to show what I'm up to.

 

In the top loop, I collect all of the sensor data at whatever rate is necessary. That loop also keeps track of the elapsed time by adding up the "dt" values from an Analog 1D Waveform DAQmx Read. The measurements and associated dt values are collected into arrays and sent off for data processing after the loop is done. DAQ AI Start, Loop, and Stop are just a convenient way of managing the DAQ communication, as in the Current - Continuous Input example. I imagine you all do something pretty similar. The data queues are all single-element and exist just to pass the latest associated reading to the bottom loop.
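As a sanity check on the elapsed-time bookkeeping, here is a tiny Python sketch of the idea (my interpretation of the post, not the actual VI): each waveform read returns n samples with sample period dt, so elapsed time advances by n * dt per read, driven by the device clock rather than a software timer.

```python
# Sketch: accumulate elapsed time from per-read (n_samples, dt) pairs,
# the software analog of summing waveform dt values from DAQmx Read.
def elapsed_after_reads(reads):
    """reads: iterable of (n_samples, dt) pairs; returns elapsed seconds."""
    elapsed = 0.0
    for n_samples, dt in reads:
        elapsed += n_samples * dt
    return elapsed
```

For example, five reads of 100 samples at dt = 0.001 s should yield 0.5 s of elapsed test time regardless of how long each loop iteration actually took.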

 

In the bottom loop, the queues are read so that the data can be displayed in the UI, and so that this loop ends when the top one does. It is set to run at 10 Hz because that's probably all a user really needs and is also sufficient for timing the relays. I suppose if that ever changed, I could move the relay management into the top loop, or perhaps into a second consumer loop? Are you allowed to have one producer and multiple consumer loops?
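The "single-element queue passes only the latest reading" pattern has a rough Python analog worth noting: a deque with maxlen=1 silently drops the older value on each write, much like a lossy enqueue in LabVIEW, so a slow 10 Hz consumer always sees the newest value instead of a growing backlog.

```python
# Sketch of a lossy single-element "latest value" channel,
# analogous to a 1-element lossy queue (or a Notifier) in LabVIEW.
from collections import deque

latest = deque(maxlen=1)

# producer loop: each write replaces whatever the consumer hasn't read yet
for reading in (1.0, 2.0, 3.0):
    latest.append(reading)

# consumer loop (running slower, e.g. 10 Hz): sees only the newest value
current = latest[-1] if latest else None
```

This is lossy by design; if the consumer must see every sample (e.g. for logging), a normal unbounded queue is the right tool instead.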

 

To answer Kevin's questions,

 

A. I am stuck with Windows.

B. I have tried software-timing the loop, but it doesn't work well enough; see the table of data a little further up the thread.

C. I believe that the top loop only iterates as fast as data comes back from the DAQ, so it's probably not wasting much CPU.

Message 18 of 19

This all looks very wrong, but it is difficult to analyze a picture. What are the various inputs to the driver VIs?

 

You are building arrays at auto-indexing output tunnels of while loops. Since the number of iterations is not predetermined, those arrays could grow quite large, and nothing is saved if the VI encounters an error, even if the first 99% of the data is useful. (The ones in the upper loop will be eliminated as dead code unless you actually use the outputs; currently nothing is connected! But building that string array in the bottom loop does not look right.)

Message 19 of 19