LabVIEW


Long running measurement, time t0 of waveform

Solved!
Go to solution

Hi,

The VI "DAQmx Read (Analog Wfm 1Chan NSamp)" reads a waveform wf from the DAQmx task.

The waveform has the components:

  • t0 : the trigger time of the waveform
  • dt : the time interval in seconds between data points in the waveform
  • Y : the data values of the waveform
  • attributes : the names and values of all waveform attributes

 

I assume the internal DAQmx behavior for setting t0 is:

First call (n=0): wf[0].t0 = system time of the first sample

Following calls (n=1, ...): wf[n].t0 = wf[n-1].t0 + wf[n-1].dt * ArraySize(wf[n-1].Y)
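
In text form, that assumption is roughly (a Python-style sketch of the logic only; the real code is a LabVIEW VI and the names here are mine):

    # Assumed t0 behaviour (my own pseudocode, not a DAQmx API call):
    def next_t0(prev_t0, prev_dt, prev_num_samples):
        # t0 of waveform n = t0 of waveform n-1 plus the duration of waveform n-1
        return prev_t0 + prev_dt * prev_num_samples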

 

For a long-running measurement the measurement time wf[n].t0 and the system time can drift apart because of:

  1. the different time bases of the system clock and of the sample clock in the DAQ device
  2. automatic adjustment of the system clock by time synchronization with a time server

I have to ensure that the time t0 is correct within certain limits.

So after every call of DAQmx Read I compare wf.t0 with the system time and raise an error if the difference is larger than a few seconds.

After an error the measurement procedure (DAQ Init, DAQ Read Loop, DAQ Close) is restarted, so I lose some samples.
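
Roughly, the check per read looks like this (a Python-style sketch of the logic only; the threshold and all names are placeholders of mine):

    import time

    MAX_DIFF_S = 5.0  # placeholder for "a few seconds"

    def check_waveform_time(t0_epoch_s, dt_s, num_samples):
        # Expected wall-clock time at the end of this chunk vs. actual system time.
        expected_end = t0_epoch_s + dt_s * num_samples
        diff = abs(time.time() - expected_end)
        if diff > MAX_DIFF_S:
            # Error -> the measurement procedure (Init, Read Loop, Close) is restarted.
            raise RuntimeError("t0 deviates from system time by %.1f s" % diff)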

 

Does anyone have an idea for a better solution? Or does DAQmx have a built-in function to compensate for the different time bases of the system clock and the DAQ device?

 

 

Peter

 

 

 

0 Kudos
Message 1 of 8
(3,027 Views)

Hi Peter,

 

So after every call of DAQmx Read I compare wf.t0 with the system time and raise an error if the difference is larger than a few seconds. After an error the measurement procedure (DAQ Init, DAQ Read Loop, DAQ Close) is restarted, so I lose some samples.

Instead of stopping and starting the task you might just save that "timing error" as an additional channel in your data file.

This way you can correct the timebase in your data analysis later on…
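
In other words, something like this per DAQmx Read (a minimal Python-flavoured sketch of the idea only; the names are just placeholders):

    import time

    def timing_error_s(t0_epoch_s, dt_s, num_samples):
        # Discrepancy between the DAQ time base and the system clock for this chunk.
        expected_end = t0_epoch_s + dt_s * num_samples
        return time.time() - expected_end

    # Write the returned value to an extra channel (e.g. "TimingError") next to
    # the measurement data and correct the time base in post-processing.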

Best regards,
GerdW


using LV2016/2019/2021 on Win10/11+cRIO, TestStand2016/2019
Message 2 of 8
(3,024 Views)

We probably need to see some code.

 

Are you using "continuous mode"? If not, each waveform read can be separated by a random long time and has its own t0 timestamp. Just guessing, I am not really a DAQ guy. 😉

0 Kudos
Message 3 of 8
(2,987 Views)

To rephrase Altenbach, if

  • you have decent DAQ hardware with a reasonably accurate and precise hardware crystal clock,
  • you are sampling in "continuous" mode,
  • you keep your DAQmx Read loop relatively "tight", typically by "exporting" the data immediately out of the DAQ loop using some form of Producer/Consumer design,
  • you have configured your DAQmx Read to output Waveform data, and
  • the DAQ hardware is working properly,

then the following should be true:

  1. Successive t0 components of your Waveforms should differ by precisely dt*N (where N is the number of samples you are acquiring); a small consistency check is sketched after this list.
  2. If you save the time you start the DAQ Read Loop, this Timestamp should match the t0 of the initial Waveform.
  3. Unless actual "clock time" of the measurements is otherwise really important, t0 can largely be ignored (as it is "known", or at least "computable").
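
As a sketch of check 1 (my own Python-flavoured pseudocode, not DAQmx code):

    def t0_is_consistent(prev_t0, curr_t0, dt, n_samples, tol_s=1e-6):
        # Successive t0 values should advance by exactly dt * N (within a tolerance).
        return abs((curr_t0 - prev_t0) - dt * n_samples) <= tol_s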

Bob Schor

 

0 Kudos
Message 4 of 8
(2,955 Views)

Hi all,

The code is similar to the shipping continuous examples, e.g. examples\DAQmx\Analog Input\Voltage - Continuous Input.vi, but implemented in an Actor Framework project with a queue.

 

Bob's assumptions are correct, so t0 is set from the system time only once, at the start.

 

Using the example above you can reproduce the problem of an unexpected system time change:

 

  1. Set the computer time back by a few minutes.
  2. Start the VI with TDMS logging.
  3. Force an update from the internet time server to set the correct time.
  4. Stop logging.

 

In the TDMS file you cannot detect the time change, because t0 is only set at the start of the measurement.

An unexpected time change can occur for many reasons (older systems with the Windows Embedded File-Based Write Filter and problems saving the DST/time-zone setting, a longer power loss combined with an inaccurate real-time clock, ...).

 

And, in small steps, the different precision of the crystal clocks behind the computer's real-time clock and the DAQ hardware.
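
For a sense of scale (my own rough numbers, not measured on this system): a 50 ppm difference between the two clocks accumulates to 86400 s * 50e-6 ≈ 4.3 s per day, so even without any time-server jump a multi-day measurement can exceed a limit of a few seconds.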

 

Saving the time difference between the current t0 and the system time as an extra channel for each TDMS chunk, as Gerd recommended, would be a solution that avoids restarting the measurement. Because the TDMS file may later be defragmented, the original sample size of the chunks must also be stored.
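
The later correction could then look roughly like this (a Python-flavoured sketch; it assumes the time difference and the original chunk size were stored per DAQmx Read, and all names are mine):

    def corrected_chunk_t0(start_t0, dt, chunk_sizes, time_diffs, k):
        # Nominal start time of chunk k according to the DAQ time base ...
        nominal = start_t0 + dt * sum(chunk_sizes[:k])
        # ... shifted by the t0/system-time difference logged for that chunk.
        return nominal + time_diffs[k]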

 

Peter

0 Kudos
Message 5 of 8
(2,925 Views)
Solution
Accepted by topic author Peter_S

I was in a fairly analogous thread a little while ago, but in that one the OP wanted to treat the realtime PC clock as the master and make adjustments to the DAQ timestamps.  You'll see in that thread that the measured time discrepancy had a very linear behavior.  So you can probably do what GerdW suggested and measure the discrepancy, but at a much lower rate than the sampling itself.  In between, it will tend to have varied linearly.

 

The remaining exception is for step-wise adjustments from a time sync process.  Since you probably won't be trying to figure out *exactly* when this happens +/- 1 sample, you just need to figure out how much uncertainty you can bear in knowing where this adjustment took place within your data stream.  That'll determine the rate at which you should measure and store the time discrepancy.
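
As a sketch of that approach (Python-flavoured, all of it my own assumption): measure the discrepancy only every few minutes, interpolate linearly in between, and treat a jump that is much larger than the expected linear drift over that interval as a step-wise time sync, localized to within one measurement interval.

    def interpolated_discrepancy(t, t_prev, d_prev, t_next, d_next):
        # Linear interpolation of the clock discrepancy between two sparse
        # measurements taken at times t_prev and t_next.
        frac = (t - t_prev) / (t_next - t_prev)
        return d_prev + frac * (d_next - d_prev)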

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
Message 6 of 8
(2,912 Views)

Hi Kevin,

Thank you for the interesting thread. I will have to build a similar test VI, also logging the system time updates from the NTP server, which should be the true time base. Up to now it has been enough to check the time difference to the system time after every DAQmx Read and restart the measurement (sketched below):

  • during an uncritical night hour, if the difference > 2 s
  • otherwise, if the difference > 10 s
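
Roughly (a Python-flavoured sketch of that rule; the 2 s / 10 s limits are from above, the night window 02:00-04:00 is just a placeholder):

    from datetime import datetime

    def restart_needed(diff_s, now=None):
        now = now or datetime.now()
        night = 2 <= now.hour < 4          # placeholder "uncritical night hour"
        limit_s = 2.0 if night else 10.0   # tighter limit at night, looser otherwise
        return abs(diff_s) > limit_s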

 

Peter

 

0 Kudos
Message 7 of 8
(2,894 Views)

Brief little bit of forewarning:

 

Adjusting on every DAQmx Read *might* prove to be over-aggressive. The system time query runs on software and OS timing, so the jitter between any two consecutive queries is likely to be very much larger than the actual amount of time skew you're trying to compensate for. In the process of trying to compensate for a tiny (but predictable and accumulating) error, you'll be adding a much larger (unpredictable but non-cumulative) jitter error.
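
To put rough numbers on it (my own illustration, not measured): with a 50 ppm clock skew and, say, 1 s of data per DAQmx Read, the real skew per read is only 1 s * 50e-6 = 50 µs, while a software timestamp query on a desktop OS can easily jitter by milliseconds, i.e. orders of magnitude more than the effect being corrected per read.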

 

Think it over to figure out what works best for your particular application.

 

 

-Kevin P

CAUTION! New LabVIEW adopters -- it's too late for me, but you *can* save yourself. The new subscription policy for LabVIEW puts NI's hand in your wallet for the rest of your working life. Are you sure you're *that* dedicated to LabVIEW? (Summary of my reasons in this post, part of a voluminous thread of mostly complaints starting here).
0 Kudos
Message 8 of 8
(2,887 Views)