Multifunction DAQ


Time delay in long-term recording for USB DAQ device

Solved!

Thanks Kevin. This workaround is also similar to what I had in mind. For the data-writing issue, I can only ignore it or interpolate the missing points for each cycle.

0 Kudos
Message 11 of 19
(1,665 Views)

Just to be clear, there's no reason to believe that there are any missing points.  You're (likely) getting all the data; there's only a slight discrepancy between the *nominal* value of dt and what the PC clock measures it to be (over an accumulation of hours).  At this point, we don't even know which one is more nearly correct.  It isn't automatically *correct* to believe the PC clock; it was just convenient for my example.

 

If also writing to file, you could do the adjustment I illustrated on *every* read, effectively treating the PC clock as the gold standard (whether true or not).  However, you may get some step discontinuities if your PC runs some kind of time-sync service (as most typically do).
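(The code in question is LabVIEW, but as a text illustration of that per-read adjustment, here's a minimal Python-flavoured sketch. The read_chunk function, chunk size, and rate are placeholders of mine, not anything from the actual VI.)

```python
import time

NOMINAL_DT = 0.004      # assumed 250 Hz nominal rate
CHUNK = 100             # assumed samples per DAQmx Read

def read_with_pc_timestamp(read_chunk):
    """Take the data from a DAQmx-style read, but stamp t0 from the PC clock
    (time at end of the read, back-dated by the chunk duration) instead of
    the task's accumulated nominal time."""
    data = read_chunk(CHUNK)                  # placeholder for the DAQmx Read
    t0 = time.time() - CHUNK * NOMINAL_DT     # PC clock as the "gold standard"
    return t0, NOMINAL_DT, data
```

A time-sync correction on the PC would show up here as an occasional small step in t0, exactly as noted above.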

 

Meanwhile, after running overnight I got an extremely steady skew resulting in about a 1 second offset after 890 minutes, which amounts to roughly 20 parts per million on my desktop board, spec'ed for 50 ppm.  Here's a screenshot of the probe:

 

time skew probe.png
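(For reference, the ~20 ppm figure is just the observed offset divided by the elapsed time:)

```python
offset_s = 1.0                        # observed drift after the overnight run
elapsed_s = 890 * 60                  # 890 minutes
print(offset_s / elapsed_s * 1e6)     # ~18.7 ppm, well inside the 50 ppm spec
```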

 

In the end, you've got to ask and answer the question: "What are the implications of storing data with a timestamp that may be wrong by around 1 minute per month?  How will it *really* hurt?   How much effort and $ is it worth to prevent or fix it?".  

 

For most apps I've worked on, it wouldn't really matter.  Data sources are correlated within the app and actual time-of-day isn't particularly important.  If it were, I'd consider storing a "time-tracker" channel, preferably at a very low rate (so possibly in a different file unless using TDMS or another multi-rate format).  The time-tracker would simply store the offset between DAQmx time and PC clock time, maybe once an hour or so.  Then the adjustment I illustrated could be done *if needed* as a post-processing step rather than always doing it in real time for little likely gain.
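(In Python-ish pseudocode rather than LabVIEW, one time-tracker record could be as simple as the following; the function and argument names are just my own illustration.)

```python
import time

def log_time_tracker(task_t0, total_samples_read, dt, logfile):
    """Append one record (say, once an hour): the offset between where DAQmx
    thinks 'now' is (task start + samples * nominal dt) and the PC clock."""
    daqmx_now = task_t0 + total_samples_read * dt
    pc_now = time.time()
    logfile.write(f"{pc_now:.3f}\t{pc_now - daqmx_now:+.4f}\n")
```

Post-processing can then re-scale the stored timestamps only if it ever turns out to matter.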

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 12 of 19
(1,661 Views)

Morning Kevin,

I agree with most of your points, except the losing-data part. If we just run the program for a couple of hours, we surely don't know which timing is more accurate, the PC clock or the DAQ timebase. But if we run the code for a month or longer, I believe the discrepancy will keep increasing steadily during the recording. I might be wrong about this, but I didn't see any sign of the PC clock jumping back a few seconds. So I believe the PC clock is more accurate in the long run, because it corrects itself via the Internet from time to time.

 

I'm going to try an external timebase later to figure out the difference between an Arduino and the NI USB board once I get time. I didn't expect a USB device to do an excellent job on this. I just want to know the best each bus (USB, PCIe, PXI) can do and whether there is anything I can do to improve it.

 

Thanks so much for sharing your ideas. I look forward to being convinced.

0 Kudos
Message 13 of 19
(1,657 Views)

One could do some sort of resampling.

±20 ppm is ±1 sample in a batch of 50000.

Fetch 50000 ± 1 samples,

do an FFT,

from the complex output steal the last bin or add one zero bin,

do an inverse FFT,

and the result is the signal, now with 50000 samples.

 

5 ppm is 1 in 200k, still no big thing for current PCs, and 5 ppm is less than 3 min a year.

In your case (250 Hz sample rate and ~100 ms/h drift) you need to add one sample every 144 s (or 36k samples).

If you sample at 2500 Hz you can correct your output every 14.4 s, do 10:1 decimation, and your PC will still be bored 🙂
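(Not LabVIEW, but here's a rough NumPy sketch of that add-or-drop-a-bin resampling; block sizes and the amplitude scaling are my assumptions, untested like the rest 😉)

```python
import numpy as np

def fft_resample(block, n_out):
    """Forward real FFT, crop or zero-pad the spectrum to the output bin
    count, inverse real FFT to n_out samples; rescale to keep the amplitude."""
    n_in = len(block)
    spectrum = np.fft.rfft(block)
    n_bins = n_out // 2 + 1
    if n_bins <= len(spectrum):
        spectrum = spectrum[:n_bins]                               # drop the top bin(s)
    else:
        spectrum = np.pad(spectrum, (0, n_bins - len(spectrum)))   # add zero bin(s)
    return np.fft.irfft(spectrum, n=n_out) * (n_out / n_in)

fixed = fft_resample(np.random.randn(50001), 50000)   # 50000±1 in, 50000 out
```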

 

Greetings from Germany
Henrik

LV since v3.1

“ground” is a convenient fantasy

'˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'


0 Kudos
Message 14 of 19
(1,652 Views)

OK, take your PC clock as a reference. Since the internal timebase of your USB device differs by about -27.7 ppm, the dt of your data isn't 4 ms, it's actually 4 ms + 27.7 ppm  ...  OK, you lost data because your bandwidth was reduced by ~30 ppm 😉  ..

So a simple solution might be to change the dt of your wfrm from 0.004 to 0.0040001108 😄
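(Same thing in plain numbers, assuming the -27.7 ppm figure from the PC-clock comparison:)

```python
nominal_dt = 0.004                       # 250 Hz
skew = 27.7e-6                           # device timebase slow by ~27.7 ppm
corrected_dt = nominal_dt * (1 + skew)
print(corrected_dt)                      # 0.0040001108
```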

Greetings from Germany
Henrik

LV since v3.1

“ground” is a convenient fantasy

'˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'


0 Kudos
Message 15 of 19
(1,648 Views)

1. I only say this to try to help here, but if you think that the timing discrepancy means that you're losing data, then you have a very fundamental misunderstanding of the entire situation.  You're getting all your data, it's just that the *reported* timing of the samples doesn't agree with your PC clock's reported time.  It's *NOT* the case that after a month of running without error, the task is now delivering data that's 1 minute old.  It's delivering data that's fresh to within a small fraction of a second, but it's *reporting* a minute-old timestamp due to the way it calculates t0 for each waveform segment.

 

2. The timing discrepancy is fundamentally a % error or parts per million kind of issue.  So *of course* the discrepancy will tend to increase steadily when running continuously for a month or more.  You are integrating a tiny little constant for a *very* long time.  That's what happens.
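(As a quick sanity check on point 2, using the ~20 ppm skew measured earlier:)

```python
skew_ppm = 20
seconds_per_month = 30 * 24 * 3600
print(skew_ppm * 1e-6 * seconds_per_month)   # ~52 s, i.e. about a minute per month
```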

 

 

3. Timing accuracy is based on specs for a given oscillator on a given board.  It isn't inherently bus-related, it's only that a very low cost USB device isn't likely to incorporate a more costly high-accuracy oscillator.

 

4. There are several ways to "improve" things, but it's important to consider "at what cost" and "for what purpose".

 

5. Given all this, it seems you're still pretty committed to using the PC clock as a master time reference.  Then just do the offset correction I illustrated after *every* DAQmx Read instead of only once every 10 minutes.

 

6. While I'd agree that it seems like a hassle to have to correct for this timing discrepancy, I would *not* be in favor of DAQmx doing this kind of thing automatically.  I think NI made the right choice in having DAQmx allow timing to be defined by the data acq hardware.

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
0 Kudos
Message 16 of 19
(1,636 Views)

Here is a VI you can place between your DAQ and the display/storage.

It should continuously adapt the dt and the t0 of continuously captured data by a relative time difference.

For a 100 ms/h lag, enter a +27.7e-6 time difference 😄

No resampling is done. If you change the rel. timing diff while running, funny stuff can happen!

 

Not tested.... not my homework 😉

 

continously change wfrm dt 01.png
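(The VI is attached only as a picture, so here's a rough Python-flavoured sketch of the same idea; the class and names are my own illustration, not the actual block diagram.)

```python
class DtCorrector:
    """Stretch dt by a relative timing difference and keep t0 contiguous
    across continuously captured chunks. No resampling of the data itself."""
    def __init__(self, nominal_dt, rel_diff, first_t0):
        self.dt = nominal_dt * (1 + rel_diff)     # e.g. rel_diff = +27.7e-6
        self.next_t0 = first_t0

    def correct(self, samples):
        t0 = self.next_t0
        self.next_t0 = t0 + len(samples) * self.dt   # next chunk starts here
        return t0, self.dt, samples
```

Changing rel_diff while running would make next_t0 jump, which is the "funny stuff" mentioned above.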

Greetings from Germany
Henrik

LV since v3.1

“ground” is a convenient fantasy

'˙˙˙˙uıɐƃɐ lɐıp puɐ °06 ǝuoɥd ɹnoʎ uɹnʇ ǝsɐǝld 'ʎɹɐuıƃɐɯı sı pǝlɐıp ǝʌɐɥ noʎ ɹǝqɯnu ǝɥʇ'


0 Kudos
Message 17 of 19
(1,621 Views)

Thanks Kevin. I know you are right on this. And the timing accuracy spec for the USB-6001, ±100 ppm, is larger than what you expected. Your answer has also been endorsed by an engineer from NI support. Thanks again for your help.

0 Kudos
Message 18 of 19
(1,581 Views)

I'll try that later. Thanks Henrik.

0 Kudos
Message 19 of 19
(1,572 Views)