09-17-2009 07:45 AM
I have two LabView-RT systems:
A. PXI for measuring about 80 analogue input channels at 5 kHz (5 samples per ms)
B. Desktop PC for triggering events and sending a reference time via a 1 kHz pulse
The 1 kHz pulse from A is incrementing the AI card's counter on B. Inside a loop on B I am measuring 1000 AI samples together with 1000 counter values. The timing source for the counter measurement is set to ai/SampleClock, so that AI and counter values are sampled at the same time.
I ran the measurement for almost 3 days without a pause. A activates an electrical load for some seconds each hour while sending the kHz pulse. A defines time by its deterministic loop. After analysing B's measurement data, I noticed that B has a counter offset that moves up and down by ~26/27 ms between load activations. Each counter value (A's time in ms) is a multiple of 1 hour (3600 * 1000 ms) except for the last digits (the looping offset), and represents a "load on" event:
...
18000047 increasing offset...
21600074
25200100
28800127
32400154
36000180
39600006 offset drops back
43200033 increasing again
46800059
50400086
54000112
57600138
61200165
64800191 offset drops back
68400017 offset increasing again
...
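The step can be checked numerically from the values above; a quick Python sketch (the event list is copied from the extract, and the offsets are taken modulo the ~200 ms wrap limit mentioned below):

```python
# Counter values (ms) at each hourly "load on" event, from the extract above.
events = [18000047, 21600074, 25200100, 28800127, 32400154, 36000180,
          39600006, 43200033, 46800059, 50400086, 54000112, 57600138,
          61200165, 64800191, 68400017]

HOUR_MS = 3600 * 1000   # events are spaced exactly one hour apart
WRAP_MS = 200           # observed limit before the offset drops back

# Offset = remainder after removing the whole hours.
offsets = [v % HOUR_MS for v in events]
print(offsets)

# Step between consecutive events, taken modulo the wrap limit so that the
# drop-backs (180 -> 6, 191 -> 17) also show up as ordinary steps.
steps = [(b - a) % WRAP_MS for a, b in zip(offsets, offsets[1:])]
print(steps)
```

Every step comes out as 26 or 27 ms, including across the drop-backs, which is consistent with a single slowly accumulating offset that wraps at ~200 ms.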
This pattern goes on for 3 days. So in general the times stay in sync, except for this looping offset with a step of ~27 ms and a limit of ~200 ms before it drops back to ~0. I suspect buffering, as I can vary this pattern by changing the sample rate and the DAQ loop interval.
What is the reason for this looping offset? Is there a better way to acquire AI and counter values synchronously?
09-18-2009 05:13 AM
09-18-2009 06:10 AM
I attached a screenshot of my generator code, which uses a deterministic loop to send the 1 kHz pulse (time information for A) and an on/off signal to the electronic load. Trigger times for both are determined by a modulo operation. I have already logged the number of sent samples to a file for verification. The code seems to do what it should.
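In pseudo-Python, the modulo trigger logic amounts to something like this (the period and on-time here are placeholders for illustration, not the exact values from my VI):

```python
# Hypothetical sketch of the modulo trigger inside the deterministic 1 kHz
# loop (one iteration per millisecond). Period/duty values are placeholders.
LOAD_PERIOD_MS = 3_600_000   # activate the load once per hour
LOAD_ON_MS = 2_500           # ...for 2.5 s (assumed value for illustration)

def load_on(t_ms: int) -> bool:
    """True while the electronic load should be switched on at loop time t_ms."""
    return t_ms % LOAD_PERIOD_MS < LOAD_ON_MS

# Each loop iteration also emits one edge of the 1 kHz reference pulse,
# so the iteration count t_ms doubles as the transmitted time information.
print(load_on(0), load_on(2_500), load_on(3_600_000))
```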
On A's side, the offset limit (before dropping back) seems to follow this relation:
(1 / sample_rate_kHz) * 1000 = looping offset limit (ms)
For 8 kHz this means (I just looked into my measurement data; extract):
09-18-2009 09:01 AM
To be more precise, as I have now figured out:
(1 / sample_rate_kHz) * number_samples_to_read = looping offset limit (ms)
I let my DAQmx Read VI read 500 samples instead of 1000 while staying at the 8 kHz rate. The looping offset limit is now 62.
Why are these two read parameters producing such a looping offset?
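The relation can be checked against all three parameter combinations from this thread (Python sketch):

```python
def offset_limit_ms(sample_rate_khz: float, samples_to_read: int) -> float:
    """Time span of one buffered read block in ms: (1 / rate_kHz) * samples."""
    return samples_to_read / sample_rate_khz

print(offset_limit_ms(5, 1000))   # original setup: 5 kHz, 1000 samples
print(offset_limit_ms(8, 1000))   # 8 kHz, 1000 samples
print(offset_limit_ms(8, 500))    # 8 kHz, 500 samples
```

The three results (200 ms, 125 ms, 62.5 ms) match the observed limits, i.e. the looping offset limit equals the duration of exactly one read block, which supports the buffering suspicion.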
09-18-2009 10:46 AM
09-19-2009 09:16 AM
a) Your expectation is correct. After 8 samples (taking the 8 kHz from above) the counter increments:
column 1: counter value, column 2: analogue sample from electronic load
1 20.1326  1 20.1326  1 20.1326  1 20.1326  1 20.1326  1 20.1326  1 20.1326  1 20.1326
2 20.1326  2 20.1326  2 20.1326  2 20.1326  2 20.1326  2 20.1326  2 20.1326  2 20.1326
3 20.1326  3 20.1326  3 20.1326  3 20.1326  3 20.1326  3 20.1326  3 20.1326  3 20.1326
4 20.1326  4 20.1326  4 20.1326  4 20.1326  4 20.1326  4 20.1326  4 20.1326  4 20.1326
5 20.1326  ...
The extract from my earlier post was filtered so that you only see the counter values when an event occurs (i.e. load switching). This filtering explains the 10 kS spacing, because the load is switched on every 10 seconds for 2.5 seconds.
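The filtering amounts to keeping only the counter value at the sample where the load switches on; roughly, over the (counter, voltage) pairs (Python sketch; the voltage threshold is an assumed placeholder):

```python
def load_on_events(pairs, threshold=10.0):
    """Return counter values at the samples where the load switches on.

    pairs: iterable of (counter_value, analogue_sample). The threshold
    separating "load off" from "load on" voltage is an assumed placeholder.
    """
    events = []
    was_on = False
    for counter, volts in pairs:
        is_on = volts > threshold
        if is_on and not was_on:      # rising edge of the load signal
            events.append(counter)
        was_on = is_on
    return events

# e.g. load off (0 V) for two samples, then on (~20 V):
print(load_on_events([(1, 0.0), (1, 0.0), (2, 20.1), (2, 20.1)]))
```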
b) Is it possible that the do and ao signals from the generator drift apart? Shouldn't the deterministic loop keep them in sync (i.e. keep them inside a short time frame)? How can I sync both signals in a better way?
As you can see in the data above, I fixed the ctr task start now.
09-21-2009 04:13 AM
Hi pgraebel,
the timed loop structure's determinism only guarantees [*] that the contained code will execute within the loop's given period - in your case a 500 ms time frame.
As the values you are writing to ao and do are predeterminable, you should replace the timed loop with buffered, continuous ao and do signal generation in a "normal" while loop. Routing the analog output's sample clock to the digital output timing node's source input will then give a hardware-synchronized output of both channels.
The buffered output requires a block of samples to be written to the buffers before starting the tasks, and subsequent blocks to be written in time while the previous block is still being output.
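To illustrate the idea (a Python sketch, not LabVIEW; the rates, levels, and block size are made-up placeholders): compute every block from the absolute sample index, so consecutive blocks join seamlessly and ao/do stay aligned on the shared sample clock.

```python
# Sketch of double-buffered block generation for matching ao/do output.
# All numeric values are assumptions for illustration, not from this thread.
SAMPLE_RATE = 2_000          # output sample clock in Hz (toggling every
                             # sample then yields the 1 kHz reference pulse)
BLOCK = 500                  # samples per written block (250 ms here)
LOAD_PERIOD = 3_600 * 2_000  # load cycle in samples (hourly at 2 kHz)
LOAD_ON = 5_000              # load-on duration in samples (2.5 s)

def make_block(start_index: int):
    """Return (do_block, ao_block) covering samples start_index .. +BLOCK-1."""
    do = [(start_index + i) % 2 == 0 for i in range(BLOCK)]          # 1 kHz pulse
    ao = [5.0 if (start_index + i) % LOAD_PERIOD < LOAD_ON else 0.0  # load signal
          for i in range(BLOCK)]
    return do, ao

# Write the first block before starting the tasks, then keep writing the
# next block while the current one is being output:
do0, ao0 = make_block(0)
do1, ao1 = make_block(BLOCK)
```

Because both channels are derived from the same sample index, there is nothing left for software timing to drift; the hardware clocks both outputs sample by sample.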
There is no explicit example for that code in LabVIEW's Example Finder, but you should be able to put two and two together when looking at the following two VIs:
Synch'ing AI and DI:
C:\Programme\National Instruments\LabVIEW 8.6\examples\DAQmx\Synchronization\Multi-Function.llb\Multi-Function-Synch AI-Read Dig Chan.vi
Synchronizing AO and DO uses the same mechanism for exporting the ao/SampleClock to the DO task, but remember to write the first sample blocks to the buffers the same way as shown in the following AO VI:
C:\Programme\National Instruments\LabVIEW 8.6\examples\DAQmx\Analog Out\Generate Voltage.llb\Cont Gen Voltage Wfm-Int Clk-Non Regeneration.vi
HTH,
Sebastian
[*] provided the code can execute in less than the specified period and sufficient CPU resources are available