
Delay in DAQmx writing multiple samples

Solved!

I'm trying to create a simple program that writes an array of values to an analog output board and then reads the resulting voltage and current. This process runs in a loop, and the voltage output of the next iteration depends on the voltage and current read in the previous iteration. I've attached a sample VI where you can see my attempts. In this sample VI, I've removed the automatic voltage adjustment and replaced it with a manual control for demonstration purposes.

 

While I'm able to set the voltage, it is actually applied a few seconds later; I measure the same delay with a multimeter. See the screenshot below:

DAQmx write delay example.png

Because the write and read are in the same loop, with the same sample rate and the same number of samples written/read, each measured point should correspond to one iteration. So I don't understand where this delay comes from. The applied voltage should change every iteration. Can anyone help me?

Message 1 of 8

What hardware are you using?


Message 2 of 8

@crossrulz wrote:

What hardware are you using?


NI PXIe-6355 for the input.

NI PXIe-6738 for the output.

Message 3 of 8

Please share the VI saved for LabVIEW 2016 so it reaches a wider audience.

 

You haven't mentioned whether there is a load or DUT; in any case, please share the complete connection schematic.

 

Note - the 6738 has a maximum drive strength of 10 mA; a large capacitor or load will definitely slow the rise time.
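
For a sense of scale (an illustration only, not from the original post): the slew rate into a purely capacitive load is limited to I/C. The 10 mA figure is the drive strength quoted above; the 10 µF load below is a made-up example value.

```python
# Illustrative only: how a current-limited output slews into a capacitive load.
# The 10 mA limit is the 6738 drive strength quoted above; the 10 uF load is a
# hypothetical example value, not anything from the original post.
i_max = 10e-3              # A, maximum drive current
c_load = 10e-6             # F, hypothetical capacitive load
slew_rate = i_max / c_load # dV/dt = I/C = 1000 V/s, i.e. 1 V per millisecond
print(f"a 5 V step would take roughly {5 / slew_rate * 1e3:.0f} ms just to slew")
```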

Santhosh
Soliton Technologies

Message 4 of 8
Solution
Accepted by topic author Basjong53

First answer:

 

This is an inherent part of a buffered output task.  The buffer prevents underflow due to lack of data but it also introduces latency before the physical signal changes.

 

In fact, there are two buffers involved. One is the task buffer you write to directly when you call DAQmx Write; then there's also a FIFO onboard the device itself.  The DAQmx driver handles moving data between the task buffer and the FIFO in the background, while your app needs to deliver data to the task buffer fast enough that the device's FIFO never runs dry.

 

DAQmx in general is *very* good at its part of the job.   At least on most devices I'm familiar with, the default behavior is to try to keep the device's FIFO full.  But there's an advanced DAQmx Channel property known as the "data transfer request condition" that lets you select a behavior to deliver data to the device when the FIFO is (virtually) empty.  That can help reduce latency on the device side.
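
If a text-based illustration helps, the same idea can be expressed through the nidaqmx Python API (the LabVIEW equivalent is a DAQmx Channel property node). This is only a sketch: the device/channel names are placeholders, and the property and enum names shown are my best recollection of the Python API, so verify them against your installed driver.

```python
# Sketch only (not from the original post): reduce device-side output latency by
# asking the driver to transfer data only when the onboard FIFO is empty.
# Device/channel names are placeholders; property/enum names should be verified.
import nidaqmx
from nidaqmx.constants import AcquisitionType, OutputDataTransferCondition

ao_task = nidaqmx.Task()
ao_chan = ao_task.ao_channels.add_ao_voltage_chan("Dev1/ao0")

# "Data transfer request condition": push data to the board's FIFO only when it
# is (virtually) empty instead of keeping it topped up.
ao_chan.ao_data_xfer_req_cond = OutputDataTransferCondition.ON_BOARD_MEMORY_EMPTY

ao_task.timing.cfg_samp_clk_timing(rate=500.0,
                                   sample_mode=AcquisitionType.CONTINUOUS)
```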

 

However, I notice your device is spec'ed for a 64k sample FIFO (shared among all channels, but you only have 1).  So it doesn't seem like that FIFO is getting filled up or you'd be waiting for more like 2 minutes for a 500 Hz sample rate to work through that size buffer.

 

Which brings us to your task buffer.  It's 1000 samples long b/c that's the amount of data you wrote to the task before starting it.  So that represents 2 seconds worth of task buffer.

 

In your loop, you repeatedly write 100 samples to your AO task and read 100 from your AI task.  Both sample at the same 500 Hz rate* (nominally at least), so every 0.2 seconds, you deliver a new 0.2 seconds worth of data to the AO task.  It takes its place in line behind the data you already wrote but is still stuck working through the FIFO.

    That line is 2 seconds long.  You'll have written 1100 samples before you read your first 100.  Then your loop iterates, 100 AO samples will have been *generated* while you waited for 100 AI to be acquired, so there are still 1000 samples in the AO FIFO when you write samples #1100-1200 to the task.   And so on.  There's always a line of 1000 samples ahead of you, so it takes 2 seconds to see your changes.
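
To make that arithmetic concrete, here's a quick back-of-the-envelope check in plain Python (not LabVIEW); the numbers are just the ones already described in this thread.

```python
# Rough latency check for the buffered-AO situation described above.
sample_rate = 500             # Hz, nominal AO/AI sample clock rate
prewritten_samples = 1000     # samples written to the AO task before it starts
samples_per_iteration = 100   # samples written and read each loop iteration

# Each new write queues up behind whatever is still unplayed. Since 100 samples
# are also generated per iteration, the queue stays ~1000 samples deep.
latency_s = prewritten_samples / sample_rate                     # 2.0 s
iterations_behind = prewritten_samples // samples_per_iteration  # 10
print(f"a newly written value reaches the output ~{latency_s:.1f} s "
      f"({iterations_behind} iterations) later")
```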

 

 

-Kevin P

 

ALERT! LabVIEW's subscription-only policy coming to an end (finally!). Permanent license pricing remains WIP. Tread carefully.
Message 5 of 8

Second answer:

 

There's a number of other things you should probably do.

 

1. Wire up your errors!  And pay attention to them in your loop!  You should very likely terminate the loop if either task throws an error.

 

2. Share a sample clock between the tasks.  Letting the 2 distinct boards generate their own sample clocks pretty much guarantees that they won't stay perfectly in sync over the long run.  Many common boards are spec'ed with absolute time accuracy of ~50 ppm.  That works out to about 3 msec per minute of acquisition time.

    If you share a sample clock, both tasks use the same signal to drive their sample timing, so they won't get out of sync.
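    (A rough text-based sketch of sharing a sample clock this way follows after item 3 below.)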

 

3. Give more thought to your plan as it relates to latency & buffering.  I don't know exactly what you need to do, so I can't give super specific advice on how to approach getting there.  In general, hardware-clocked timing goes hand-in-hand with buffering, and buffering goes hand-in-hand with unavoidable finite latency.
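
As mentioned in item 2, here's a minimal sketch of clock sharing using the nidaqmx Python API (in LabVIEW this is simply the DAQmx Timing VI with the other task's sample clock terminal wired in as the source). The device names and the exact terminal string are placeholders; check the available routes for your chassis.

```python
# Sketch only: slave the AI task to the AO task's sample clock so both boards
# stay sample-for-sample in sync. Device names/terminals are placeholders.
import nidaqmx
from nidaqmx.constants import AcquisitionType

ao = nidaqmx.Task()
ao.ao_channels.add_ao_voltage_chan("PXI1Slot3/ao0")        # e.g. the 6738
ao.timing.cfg_samp_clk_timing(rate=500.0,
                              sample_mode=AcquisitionType.CONTINUOUS)

ai = nidaqmx.Task()
ai.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0:1")      # e.g. the 6355
ai.timing.cfg_samp_clk_timing(rate=500.0,
                              source="/PXI1Slot3/ao/SampleClock",
                              sample_mode=AcquisitionType.CONTINUOUS)

ai.start()   # arm the task that listens to the clock first...
ao.start()   # ...then start the task that actually produces it
```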

 

I may have time later today to look up and point you toward some examples.

 

 

-Kevin P

Message 6 of 8

@Kevin_Price wrote:

In your loop, you repeatedly write 100 samples to your AO task and read 100 from your AI task.  Both sample at the same 500 Hz rate* (nominally at least), so every 0.2 seconds, you deliver a new 0.2 seconds worth of data to the AO task.  It takes its place in line behind the data you already wrote but is still stuck working through the FIFO.

    That line is 2 seconds long.  You'll have written 1100 samples before you read your first 100.  Then your loop iterates, 100 AO samples will have been *generated* while you waited for 100 AI to be acquired, so there are still 1000 samples in the AO FIFO when you write samples #1100-1200 to the task.   And so on.  There's always a line of 1000 samples ahead of you, so it takes 2 seconds to see your changes.

 

 

-Kevin P

 


Yep, you're absolutely right. I didn't see that I was creating a task buffer of 1000 instead of 100! Setting that to 100 reduces the delay a lot. There's still some delay, but timing isn't very important in this program.

 

@Kevin_Price wrote:

Second answer:

 

There's a number of other things you should probably do.

 

1. Wire up your errors!  And pay attention to them in your loop!  You should very likely terminate the loop if either task throws an error.


I know, this is just a first prototype to test whether my idea works or not. Details like error handling, documentation and UI will come later.

 

2. Share a sample clock between the tasks.  Letting the 2 distinct boards generate their own sample clocks pretty much guarantees that they won't stay perfectly in sync over the long run.  Many common boards are spec'ed with absolute time accuracy of ~50 ppm.  That works out to about 3 msec per minute of acquisition time.

    If you share a sample clock, both tasks use the same signal to drive their sample timing, so they won't get out of sync


Good suggestion!

 


3. Give more thought to your plan as it relates to latency & buffering.  I don't know exactly what you need to do, so I can't give super specific advice on how to approach getting there.  In general, hardware-clocked timing goes hand-in-hand with buffering, and buffering goes hand-in-hand with unavoidable finite latency.

 

I may have time later today to look up and point you toward some examples.

So basically I'm implementing a perturb and observe algorithm with higher precision than the standard 3 points. For a photovoltaic device there is a certain bias voltage at which the output power is highest; increasing or decreasing the voltage away from that point only reduces the power. This algorithm is a simple way to find that point: you apply a voltage plus a few voltage points around it, measure where the power is highest, and use that new voltage in the next iteration, essentially tracking the maximum power point over time.
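
For anyone following along, a bare-bones perturb-and-observe loop looks roughly like the sketch below (plain Python; measure_iv() is a hypothetical stand-in for the DAQmx write/read pair, not the actual VI, and the step size and iteration count are made-up values).

```python
# Illustrative perturb & observe (P&O) maximum power point tracker.
# measure_iv() is a hypothetical placeholder for the DAQmx AO write + AI read.
def measure_iv(v_bias):
    """Apply v_bias to the device and return (voltage, current) as measured."""
    raise NotImplementedError  # replace with the actual DAQmx write/read pair

def perturb_and_observe(v_start=0.5, dv=0.01, iterations=1000):
    v = v_start
    v_meas, i_meas = measure_iv(v)
    p_prev = v_meas * i_meas
    step = dv
    for _ in range(iterations):
        v += step                        # perturb the bias voltage
        v_meas, i_meas = measure_iv(v)   # observe the resulting power
        p = v_meas * i_meas
        if p < p_prev:                   # power dropped, so reverse direction
            step = -step
        p_prev = p
    return v                             # operating point near the maximum power point
```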

 

This is generally a slow process, so timing isn't super critical. Of course, the lower the latency, the better. So if you can provide additional examples, especially with DAQmx, that would be great!

 

Thanks a lot.

 

 

 


Message 7 of 8

This seems to be a pretty discrete process and you're using a device that will maintain the most recent AO voltage output after you stop the task.   So......

 

Maybe you should do this as a series of finite acquisition tasks.  In each loop iteration, you start, write / read, and stop both the AO and AI tasks.  You'll want them sync'ed, but you'll probably also want the AI slightly delayed to make sure the AO and your system are producing a final stable result before you sample.

 

You'll have to figure out how long your system needs to stabilize -- that'll determine the needed delay and the corresponding max reasonable sample rate.  I would normally do this by running a counter task that both AO and AI use as their sample clock, and whose frequency and pulse width are under my control.  The trick is that AO generates a sample on the leading edge while AI starts its sampling on the trailing edge.  And then you *might* need to use special DAQmx properties to speed up the AI convert clock to make sure all multiplexed channels get sampled before the next AO sample.

 

For example, let's suppose your system is sure to show a stable response 1.5 msec after the stimulus voltage changes.  That means we'll want our pulse train to have a high time of 1.5 msec, and we'll have AO generate a sample on the rising edge while AI starts its sampling on the falling edge.

    Now let's further suppose you have 2 channels to sample and you want a 500 Hz sample rate.  Because we're waiting 1.5 msec to settle, we need to sample both channels in the remaining 0.5 msec.  The AI Convert Clock controls the multiplexer, and there's a little extra timing overhead beyond the simple straight math of 2 cycles at the convert clock rate.  It's generally not more than tens of microseconds IIRC, but it's not quite zero.  (And it *is* defined if you dig deep enough in various manuals and docs.)   So we need to be sure that 2 cycles of the AI Convert Clock will fit in something a bit less than 0.5 msec.  0.4 msec should be safe, so 0.2 msec per channel for a convert rate of 5000 Hz.
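
Worked out as a quick calculation (plain Python, using only the example numbers above):

```python
# Timing budget for the example: 2 AI channels, 500 Hz sample rate, and 1.5 ms
# of settling time after each AO step.
sample_period = 1 / 500               # 2.0 ms between AO samples
settle_time = 1.5e-3                  # high time of the counter pulse
window = sample_period - settle_time  # 0.5 ms left for the AI scan
safe_window = 0.4e-3                  # leave margin for multiplexer overhead
channels = 2
convert_rate = channels / safe_window # 5000 Hz AI Convert Clock
print(f"{safe_window / channels * 1e3:.1f} ms per channel -> "
      f"{convert_rate:.0f} Hz convert clock")
```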

 

All this timing stuff can be figured out offline, and then programmed just once before the loop starts.  Inside the loop you only need to repeatedly start, write/read, and stop.  [Note: actually, if you follow my counter suggestion, only the counter task needs to be finite and start & stop inside the loop.  The AO and AI could be continuous and start before the loop.  For a counter that generates N pulses, you just write N AO samples before starting the counter and read N AI samples after.]
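
A rough sketch of the simpler per-iteration pattern (finite AO and AI tasks that are started, written/read, and stopped each pass) is below, written against the nidaqmx Python API rather than as a VI. Channel names and numbers are placeholders, and no settling delay or hardware sync is implemented here; that's exactly what the counter-as-shared-sample-clock refinement described above would add.

```python
# Sketch only: one finite AO/AI pass per loop iteration, as described above.
# Device/channel names are placeholders. Synchronization is crude (AI simply
# started right after AO); the counter-clock trick in the post is the better way.
import numpy as np
import nidaqmx
from nidaqmx.constants import AcquisitionType

N = 10         # samples per iteration
RATE = 500.0   # Hz

with nidaqmx.Task() as ao, nidaqmx.Task() as ai:
    ao.ao_channels.add_ao_voltage_chan("PXI1Slot3/ao0")
    ao.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.FINITE,
                                  samps_per_chan=N)
    ai.ai_channels.add_ai_voltage_chan("PXI1Slot2/ai0:1")   # voltage + current
    ai.timing.cfg_samp_clk_timing(RATE, sample_mode=AcquisitionType.FINITE,
                                  samps_per_chan=N)

    v_bias = 0.5
    for _ in range(100):
        ao.write(np.full(N, v_bias), auto_start=False)
        ao.start()
        ai.start()     # no real settling delay here; see the counter approach
        data = ai.read(number_of_samples_per_channel=N, timeout=5.0)
        ao.wait_until_done(timeout=5.0)
        ao.stop()
        ai.stop()
        # ...compute power from `data` here and choose the next v_bias...
```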

 

 

-Kevin P

Message 8 of 8