
Synchronization with Queues and DAQmx Read Functions

Solved!

Hello,

 

I am trying to acquire data and stream it to a TDMS file on disk continuously from multiple devices at the same rate. These devices are all PCIe-6343. 


I have attached the VI I use for this purpose. I believe I do have these devices synchronized, as I verified the timing with a signal generator.

 

The problem arises when I try to write data to the TDMS file. Although the sample clock is shared, I notice that, at times, the values in my TDMS file for device 2 come out as zeros. I use queues to transfer data between the two loops.

Can someone point out what I am missing here? 

 

Thanks

Message 1 of 10
Solution
Accepted by topic author yezdi777

You failed to set the number of samples to read in the DAQmx Read.  In your DAQmx Timing VI, you left Samples per Channel unwired, so it used the default value of 1000 (I recommend always wiring this important parameter!).  When you don't specify the number of samples per channel in the DAQmx Read (see the Help for this), it reads however many points are currently available when you are doing Continuous Samples (which you are).  If you instead ask it to read the same number of samples as "Samples per Channel" (and use the same wire to connect this value to both places it is needed), then both DAQmx Reads will "block" (meaning "wait until all of the samples have been acquired and are ready to be output") and finish more or less at the same time.  Now you can use Build Array to combine them before putting them in the Queue.
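For anyone following along in text form, here is roughly the same idea expressed with NI's nidaqmx Python API (a sketch only -- the LabVIEW wiring is analogous, the device and channel names are placeholders, and clock/trigger sharing is omitted here):

    import nidaqmx
    from nidaqmx.constants import AcquisitionType

    SAMPS = 1000  # "Samples per Channel" -- wire the SAME value to Timing and Read

    with nidaqmx.Task() as t1, nidaqmx.Task() as t2:
        t1.ai_channels.add_ai_voltage_chan("Dev1/ai0:15")   # placeholder channels
        t2.ai_channels.add_ai_voltage_chan("Dev2/ai0:2")
        for t in (t1, t2):
            t.timing.cfg_samp_clk_timing(
                rate=1000.0,
                sample_mode=AcquisitionType.CONTINUOUS,
                samps_per_chan=SAMPS,      # also sets the minimum buffer size
            )
        t2.start()
        t1.start()

        # Requesting an explicit count makes each read BLOCK until exactly
        # that many samples per channel have been acquired.
        d1 = t1.read(number_of_samples_per_channel=SAMPS)
        d2 = t2.read(number_of_samples_per_channel=SAMPS)
        combined = d1 + d2   # analogous to Build Array before enqueuing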

 

Bob Schor

Message 2 of 10

Hello Bob Schor,

 

Thanks for the suggestion. I tried setting the number of samples to 100 and the rate to 100 Hz in the DAQmx Timing VI, and it looks like everything works fine. It was a lifesaver. I do have a couple of follow-up questions.

 

1) To experiment with the rate, I tried increasing the rate on Dev2, and it looks like I still get arrays of equal size. I set Dev2's rate to 500 Hz and the number of samples to 400, ran it for 4 seconds, and generated 2000 samples on both devices. On Dev1, the rate was 400 Hz. So, did device 1 just have previous data stored in the queue which got written?

2) Also, to ensure that I do not overload the FIFO buffer, I need to make sure that I do not generate more than 2047 samples before I empty the queue, right? By my calculation, I can only do around 125 samples/channel if all 16 of my AI channels are active (2047 samples / 16 channels). My sample rate will control the timing of the while loop, so I don't really have to worry about it as long as I am below the card's limit of 500 kS/s, right?

 

3) Since I am using two devices in the same queue, do I need to account for the channels of Dev2 in the calculation for Dev1's FIFO? I ask this because Dev2 has its own FIFO memory, so it shouldn't really affect the data acquisition on Dev1, right? Should the number of samples/channel be 2047/16 for Dev1 and 2047/3 for Dev2?

 

 

Thanks

Message 3 of 10

Solution
Accepted by topic author yezdi777

I wanted to take a look at the code yesterday but for some odd reason, every time I tried to open the diagram LabVIEW would crash.  After a fresh reboot this morning, things were back to normal so let me add a little to Bob's earlier advice.  I'll start with your questions #1-3.

 

1.  Reading the same # of samples from tasks sampling at different rates will end up pacing your loop according to the task with the slower sample rate.  So your reads will be "keeping up" with the 400 Hz task, but you'll be building up a backlog in the 500 Hz task.  In your experiment, the 5-second run time produced 2000 and 2500 samples, respectively.  You retrieved the first 2000 of them from each task, leaving 500 left over in the buffer of the 500 Hz task.

    In general, keeping your data in sync will tend to require a little bit of choreography that involves both DAQ hardware timing signals *AND* proper handling of the data you read from the tasks.
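    You can watch that backlog grow directly. Continuing the Python sketch from earlier in the thread (t1 and t2 are the running tasks; avail_samp_per_chan reports the unread samples sitting in each task buffer):

    import time

    # With t1 sampling at 400 Hz, t2 at 500 Hz, and the loop reading a
    # fixed 400 samples from each, t1's buffer stays drained while t2's
    # backlog grows by roughly 100 samples per second.
    for _ in range(5):
        time.sleep(1.0)
        print("t1 backlog:", t1.in_stream.avail_samp_per_chan,
              "t2 backlog:", t2.in_stream.avail_samp_per_chan)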

 

2. You don't need to worry about the device's FIFO -- the DAQmx driver will take care of delivering data from the device to your task buffer in the background.  Your job is to make sure the *task buffer* doesn't overflow.  You do so through a combination of buffer size and the rate at which you read data from the task.

    You can make the task buffer quite large if you like.  The way the task buffer size gets set follows rules that can seem a little weird and unintuitive.  For Continuous sampling, the value you wire into the 'samples per channel' input of DAQmx Timing sets a *minimum* buffer size, but DAQmx might override you and make a bigger buffer than what you asked for.  See this article.

    A typical rule of thumb is to read 1/10 sec worth of samples each loop iteration.  If you follow it, you shouldn't have any problem keeping up with 500 kHz sampling while reading 50k samples per iteration.  Whether you *actually* avoid problems will depend on other aspects of your code, though.
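    In nidaqmx Python terms, the same advice looks roughly like this (a sketch only; DAQmx may round the buffer size up per the rules mentioned above, and 'task' is assumed to be an AI task configured as in the earlier sketch):

    from nidaqmx.constants import AcquisitionType

    RATE = 500_000.0               # 500 kS/s, the card's aggregate limit
    READ_CHUNK = int(RATE // 10)   # rule of thumb: 1/10 s of data per read

    task.timing.cfg_samp_clk_timing(
        rate=RATE,
        sample_mode=AcquisitionType.CONTINUOUS,
        samps_per_chan=4 * READ_CHUNK,   # a generous MINIMUM buffer request
    )
    # You can also set (or inspect) the actual task buffer explicitly:
    task.in_stream.input_buf_size = 10 * READ_CHUNK
    task.start()

    for _ in range(100):           # ~10 seconds of acquisition
        # Each read drains 1/10 s of data, keeping the buffer from overflowing.
        data = task.read(number_of_samples_per_channel=READ_CHUNK)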

 

3. Again, you aren't going to need to concern yourself with the onboard FIFOs.

 

Now then, onto some general comments.

 

A. The two tasks both use an external sample clock, but they source their sample clock signal from different PFI pins.  No matter what number you wire in as the 'rate', those signals themselves will define the actual rate.  (As mentioned earlier, the number you wire in as the estimated rate *may* be used to increase your task buffer size.)

    Note further that DAQmx will dutifully *believe* the value you wire and would use it in waveform data if you did your reads in terms of waveforms rather than 2D arrays.  Something to watch out for in the future.
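    For reference, an externally clocked task looks something like this in the nidaqmx Python API (a sketch; the terminal name is a placeholder):

    # The task is clocked by whatever edges arrive on the PFI pin; the
    # 'rate' value is only an estimate used for buffer sizing -- and, if
    # you read waveforms, it becomes the embedded dt, which will be wrong
    # whenever the real external rate differs.
    task.timing.cfg_samp_clk_timing(
        rate=1000.0,               # estimate, NOT the true rate
        source="/Dev2/PFI0",       # placeholder terminal name
        sample_mode=AcquisitionType.CONTINUOUS,
        samps_per_chan=10_000,
    )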

 

B. The two tasks are both triggered from distinct PFI pins.  Your two tasks may indeed be sync'ed to start acquiring at the same time, but that'll depend on the triggering signal(s).  If, for example, the triggering signal was a constant 1 MHz clock, your tasks would likely be triggered at two different cycles of that clock.

 

C. The best approach for timing your reads and enqueuing your data depends *very much* on your hardware sync.  If both tasks truly share the same sample clock signal, share the same trigger signal, and the trigger event cannot happen until after both tasks have been started, then (and ONLY then) you can time the loop solely by calls to DAQmx Read that request the same # samples from each task, and you can append the two 2D arrays into a bigger 2D array before enqueuing.

    However, if you have 2 different sample rates or if either external sample clock signal can have a variable rate, you should approach the whole thing differently.

    In that scenario, I'd recommend a different datatype for enqueuing.  It should be a cluster containing two 2D arrays -- one from each task.  The reason is that they will (or at least might) contain different #'s of samples.  As with almost all clusters, you should make a typedef out of it.  

    I'd change loop iteration timing over to be based on a msec Wait timer set for maybe 100 msec.  Then I'd revert to your original method of calling DAQmx Read with -1 wired in as the number of samples, meaning "give me all you've got."  You should also set up dataflow to make sure the Wait completes before either Read function is called.

   It would also be possible to pace the loop by first calling one task to Read a fixed # samples, then calling the other to read "all available samples" (by wiring in -1).
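    A hedged Python sketch of that variable-rate approach (the LabVIEW cluster typedef becomes a tuple here; t1 and t2 are the running tasks from the earlier sketches):

    import time
    from queue import Queue
    from nidaqmx.constants import READ_ALL_AVAILABLE

    q = Queue()   # each element: a pair of independently sized arrays

    for _ in range(50):       # stand-in for the acquisition while loop
        time.sleep(0.100)     # the ~100 ms Wait paces the loop, not the reads
        # READ_ALL_AVAILABLE (i.e. -1) means "give me all you've got"; the
        # two chunks may have different lengths, so keep them separate.
        d1 = t1.read(number_of_samples_per_channel=READ_ALL_AVAILABLE)
        d2 = t2.read(number_of_samples_per_channel=READ_ALL_AVAILABLE)
        q.put((d1, d2))       # the tuple plays the role of the cluster typedef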

 

D. Only in the very special case of perfect hardware sync can you combine both tasks' data into a single 2D array for the sake of writing to TDMS.  In the more general case, you'll need to write each task's data separately.

 

E. General note: your TDMS data logging strategy needs work.  You have an infinite loop, and you always enqueue but conditionally dequeue.   You need a way to stop the logging loop cleanly and you should probably conditionally enqueue while always attempting to dequeue.
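    One common pattern, sketched in Python (a sentinel value tells the logging loop to finish and stop; write_to_tdms is a hypothetical stand-in for the TDMS Write calls):

    STOP = None   # sentinel the producer enqueues exactly once when done

    # Producer side, after the acquisition loop exits:
    q.put(STOP)

    # Consumer (logging) side: always attempt to dequeue, and exit
    # cleanly when the sentinel arrives instead of looping forever.
    while True:
        item = q.get()           # blocks until data or the sentinel arrives
        if item is STOP:
            break
        d1, d2 = item
        write_to_tdms(d1, d2)    # hypothetical stand-in for TDMS Write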

 

 

-Kevin P

Message 6 of 10

Hello Kevin,

 

Thanks for taking the time to clarify these concepts. It was all pretty hazy for me, and I have a much better sense of things now.

 

In reply to your general comments: I do have the two devices sharing the same sample clock and trigger signal. I do so by exporting the internal Analog Input Sample Clock and a start trigger from Dev1 using DAQmx Export Signal, and Dev2 receives these signals. So, for the task on Dev1, it is an internal sample clock, right? Device 2 receives an external sample clock.

 

Thanks again for the ideas about the FIFOs, using clusters, and TDMS writing!

Rahul

Message 7 of 10

One little amendment about the queue datatype.  It's probably better to approach this using arrays of waveforms rather than 2D arrays of doubles.  My brain was stuck on avoiding waveforms because of another thread I'm involved with, where an external sampling clock has a variable rate, so the timing info embedded in waveforms would have been wrong and misleading.

 

There's a version of DAQmx Read you can choose that will return data as waveforms rather than as 2D double arrays.  That approach would still be suitable if your tasks were configured with different sample rates, because an array of waveforms doesn't require that each individual waveform carry the same # of samples in its data.
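If you're working in text code rather than LabVIEW, the waveform concept is just samples bundled with t0 and dt; a hypothetical minimal Python equivalent for the queue element:

    from dataclasses import dataclass

    @dataclass
    class Waveform:
        """Minimal stand-in for LabVIEW's waveform datatype.  Two of these
        in one queue element need not carry the same number of samples."""
        t0: float        # start time of this chunk
        dt: float        # seconds per sample = 1 / actual sample rate
        samples: list    # one channel's data for this chunk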

 

 

-Kevin P

Message 8 of 10

Oh yeah, one more amendment about syncing the tasks to one another.

 

I've been beating the drum for a long time around here about how often sync can be most simply achieved *solely* by sharing a sample clock.  It seems that most docs and help plant the idea of needing start triggers for sync, but it often just isn't so.

 

In your case, if you export the sample clock signal out from Dev1 and bring it in as the sample clock for Dev2, that alone will let you sync sampling between the tasks.  All you need to do is make sure to call DAQmx Start for Dev2 *before* calling it for Dev1.  That way Dev2 is ready to "see" and respond to the first sample clock signal that Dev1 produces.

 

You don't need to configure any kind of triggering at all, despite all the help docs that suggest otherwise.  In fact, sharing only a sample clock is usually a *better* method than sharing only a start trigger when you're syncing tasks on two different devices. 

    Each device has its own internal oscillator clock that gets divided down for use as a sample clock.  These oscillators are spec'ed with small accuracy tolerances such as 50 parts per million (0.005 %), and the actual frequency won't be quite exactly the same on two different devices.  (Note that these comments apply primarily to the desktop PCI/PCIe and USB devices I most commonly use.  cDAQ and PXI platforms have chassis features built-in to avoid or compensate for this issue.)
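    Pulling the whole scheme together in nidaqmx Python terms (a sketch only; the export property name is as I recall it, and the PFI pins are placeholders for however you've wired the devices):

    from nidaqmx.constants import AcquisitionType

    RATE = 1000.0

    # Dev1 runs on its internal sample clock and exports it on a PFI pin.
    t1.timing.cfg_samp_clk_timing(rate=RATE,
                                  sample_mode=AcquisitionType.CONTINUOUS)
    t1.export_signals.samp_clk_output_term = "/Dev1/PFI4"

    # Dev2 is clocked by that exported signal (wired to its own PFI pin).
    t2.timing.cfg_samp_clk_timing(rate=RATE,            # estimate only
                                  source="/Dev2/PFI0",
                                  sample_mode=AcquisitionType.CONTINUOUS)

    # Start order matters: arm Dev2 BEFORE Dev1's clock starts, so Dev2
    # sees Dev1's very first sample-clock edge.  No trigger is needed.
    t2.start()
    t1.start()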

 

 

-Kevin P

Message 9 of 10

Hello Kevin,

 

Thanks for the tip about the waveform datatype. It helped! 

 

Good to know that syncing with sample clocks is sufficient as long as Dev2 is started before Dev1. This also frees up a PFI pin for some other use.

 

Again, I appreciate your insight on these issues. 

 

Thank you.

Message 10 of 10